00:00:00.001 Started by upstream project "autotest-per-patch" build number 132737
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.119 The recommended git tool is: git
00:00:00.119 using credential 00000000-0000-0000-0000-000000000002
00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.179 Fetching changes from the remote Git repository
00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.232 Using shallow fetch with depth 1
00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.232 > git --version # timeout=10
00:00:00.276 > git --version # 'git version 2.39.2'
00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.304 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.304 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.824 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.835 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.848 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.848 > git config core.sparsecheckout # timeout=10
00:00:06.859 > git read-tree -mu HEAD # timeout=10
00:00:06.874 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.894 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.894 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.976 [Pipeline] Start of Pipeline
00:00:06.989 [Pipeline] library
00:00:06.991 Loading library shm_lib@master
00:00:06.991 Library shm_lib@master is cached. Copying from home.
00:00:07.007 [Pipeline] node
00:00:07.019 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.020 [Pipeline] {
00:00:07.031 [Pipeline] catchError
00:00:07.032 [Pipeline] {
00:00:07.041 [Pipeline] wrap
00:00:07.047 [Pipeline] {
00:00:07.051 [Pipeline] stage
00:00:07.053 [Pipeline] { (Prologue)
00:00:07.269 [Pipeline] sh
00:00:07.548 + logger -p user.info -t JENKINS-CI
00:00:07.568 [Pipeline] echo
00:00:07.570 Node: WFP37
00:00:07.576 [Pipeline] sh
00:00:07.872 [Pipeline] setCustomBuildProperty
00:00:07.884 [Pipeline] echo
00:00:07.885 Cleanup processes
00:00:07.889 [Pipeline] sh
00:00:08.167 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.167 3508850 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.180 [Pipeline] sh
00:00:08.464 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.464 ++ grep -v 'sudo pgrep'
00:00:08.464 ++ awk '{print $1}'
00:00:08.464 + sudo kill -9
00:00:08.464 + true
00:00:08.477 [Pipeline] cleanWs
00:00:08.486 [WS-CLEANUP] Deleting project workspace...
00:00:08.486 [WS-CLEANUP] Deferred wipeout is used...
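
The "Cleanup processes" step above uses a common pgrep/grep/awk/kill idiom. A minimal standalone sketch of the same pattern, assuming the workspace path used by this job:

    #!/usr/bin/env bash
    # Kill stale SPDK processes left over from a previous run.
    # grep -v drops the pgrep invocation itself from the listing, and
    # '|| true' keeps the step green when no matching processes exist,
    # mirroring the '+ true' that follows the failed 'kill -9' in the log.
    WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true
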
00:00:08.492 [WS-CLEANUP] done
00:00:08.496 [Pipeline] setCustomBuildProperty
00:00:08.509 [Pipeline] sh
00:00:08.788 + sudo git config --global --replace-all safe.directory '*'
00:00:08.877 [Pipeline] httpRequest
00:00:09.235 [Pipeline] echo
00:00:09.237 Sorcerer 10.211.164.101 is alive
00:00:09.247 [Pipeline] retry
00:00:09.249 [Pipeline] {
00:00:09.265 [Pipeline] httpRequest
00:00:09.270 HttpMethod: GET
00:00:09.270 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.270 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.285 Response Code: HTTP/1.1 200 OK
00:00:09.285 Success: Status code 200 is in the accepted range: 200,404
00:00:09.286 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.991 [Pipeline] }
00:00:15.008 [Pipeline] // retry
00:00:15.016 [Pipeline] sh
00:00:15.298 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.315 [Pipeline] httpRequest
00:00:15.699 [Pipeline] echo
00:00:15.701 Sorcerer 10.211.164.101 is alive
00:00:15.711 [Pipeline] retry
00:00:15.713 [Pipeline] {
00:00:15.728 [Pipeline] httpRequest
00:00:15.733 HttpMethod: GET
00:00:15.733 URL: http://10.211.164.101/packages/spdk_f9a92382fc5a95d2cf9b56626020943647bd15fc.tar.gz
00:00:15.734 Sending request to url: http://10.211.164.101/packages/spdk_f9a92382fc5a95d2cf9b56626020943647bd15fc.tar.gz
00:00:15.749 Response Code: HTTP/1.1 200 OK
00:00:15.749 Success: Status code 200 is in the accepted range: 200,404
00:00:15.750 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_f9a92382fc5a95d2cf9b56626020943647bd15fc.tar.gz
00:00:47.393 [Pipeline] }
00:00:47.410 [Pipeline] // retry
00:00:47.417 [Pipeline] sh
00:00:47.700 + tar --no-same-owner -xf spdk_f9a92382fc5a95d2cf9b56626020943647bd15fc.tar.gz
00:00:50.296 [Pipeline] sh
00:00:50.575 + git -C spdk log --oneline -n5
00:00:50.575 f9a92382f bdev/compress: Simplify split logic for unmap operation
00:00:50.575 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:50.575 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:50.575 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:50.575 e2dfdf06c accel/mlx5: Register post_poller handler
00:00:50.583 [Pipeline] }
00:00:50.594 [Pipeline] // stage
00:00:50.602 [Pipeline] stage
00:00:50.604 [Pipeline] { (Prepare)
00:00:50.617 [Pipeline] writeFile
00:00:50.631 [Pipeline] sh
00:00:50.911 + logger -p user.info -t JENKINS-CI
00:00:50.923 [Pipeline] sh
00:00:51.204 + logger -p user.info -t JENKINS-CI
00:00:51.215 [Pipeline] sh
00:00:51.494 + cat autorun-spdk.conf
00:00:51.494 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.494 SPDK_TEST_NVMF=1
00:00:51.494 SPDK_TEST_NVME_CLI=1
00:00:51.494 SPDK_TEST_NVMF_NICS=mlx5
00:00:51.494 SPDK_RUN_UBSAN=1
00:00:51.494 NET_TYPE=phy
00:00:51.500 RUN_NIGHTLY=0
00:00:51.504 [Pipeline] readFile
00:00:51.526 [Pipeline] withEnv
00:00:51.528 [Pipeline] {
00:00:51.540 [Pipeline] sh
00:00:51.822 + set -ex
00:00:51.822 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:51.822 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:51.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.822 ++ SPDK_TEST_NVMF=1
00:00:51.822 ++ SPDK_TEST_NVME_CLI=1
00:00:51.822 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:51.822 ++ SPDK_RUN_UBSAN=1
00:00:51.822 ++ NET_TYPE=phy
00:00:51.822 ++ RUN_NIGHTLY=0
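
Outside of Jenkins, the httpRequest/tar pair above can be approximated with curl and tar. A sketch under the assumption that a plain HTTP GET against the Sorcerer cache is all that is needed (the retry policy below is illustrative, not the pipeline's exact behavior):

    # Fetch a pinned tarball from the package cache and unpack it.
    # --fail makes curl exit non-zero on HTTP errors; --retry loosely
    # stands in for the [Pipeline] retry block around httpRequest.
    set -e
    url=http://10.211.164.101/packages/spdk_f9a92382fc5a95d2cf9b56626020943647bd15fc.tar.gz
    curl --fail --retry 3 -o spdk.tar.gz "$url"
    # --no-same-owner: extract files as the invoking user rather than
    # the UIDs recorded in the archive (matters under sudo/CI).
    tar --no-same-owner -xf spdk.tar.gz
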
00:00:51.822 + case $SPDK_TEST_NVMF_NICS in
00:00:51.822 + DRIVERS=mlx5_ib
00:00:51.822 + [[ -n mlx5_ib ]]
00:00:51.822 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:51.822 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:58.395 rmmod: ERROR: Module irdma is not currently loaded
00:00:58.395 rmmod: ERROR: Module i40iw is not currently loaded
00:00:58.395 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:58.395 + true
00:00:58.395 + for D in $DRIVERS
00:00:58.395 + sudo modprobe mlx5_ib
00:00:58.395 + exit 0
00:00:58.405 [Pipeline] }
00:00:58.424 [Pipeline] // withEnv
00:00:58.429 [Pipeline] }
00:00:58.444 [Pipeline] // stage
00:00:58.455 [Pipeline] catchError
00:00:58.457 [Pipeline] {
00:00:58.474 [Pipeline] timeout
00:00:58.475 Timeout set to expire in 1 hr 0 min
00:00:58.477 [Pipeline] {
00:00:58.492 [Pipeline] stage
00:00:58.494 [Pipeline] { (Tests)
00:00:58.509 [Pipeline] sh
00:00:58.795 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:58.795 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:58.795 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:58.795 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:58.795 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:58.795 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:58.795 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:58.795 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:58.795 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:58.795 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:58.795 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:00:58.795 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:58.795 + source /etc/os-release
00:00:58.795 ++ NAME='Fedora Linux'
00:00:58.795 ++ VERSION='39 (Cloud Edition)'
00:00:58.795 ++ ID=fedora
00:00:58.795 ++ VERSION_ID=39
00:00:58.795 ++ VERSION_CODENAME=
00:00:58.795 ++ PLATFORM_ID=platform:f39
00:00:58.795 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:58.795 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:58.795 ++ LOGO=fedora-logo-icon
00:00:58.795 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:58.795 ++ HOME_URL=https://fedoraproject.org/
00:00:58.795 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:58.795 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:58.795 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:58.795 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:58.795 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:58.795 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:58.795 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:58.795 ++ SUPPORT_END=2024-11-12
00:00:58.795 ++ VARIANT='Cloud Edition'
00:00:58.795 ++ VARIANT_ID=cloud
00:00:58.795 + uname -a
00:00:58.795 Linux spdk-wfp-37 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
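
The rmmod/modprobe sequence above forces a clean reload of the RDMA driver stack before the test. A minimal sketch of the same idiom, with the driver list taken from the log ('|| true' plays the role of the '+ true' that swallows rmmod errors for modules that are not loaded):

    # Unload any conflicting RDMA/iWARP modules, ignoring "not loaded"
    # errors, then load only the driver this job needs (mlx5 NICs).
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    DRIVERS=mlx5_ib
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done
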
00:00:58.795 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:01.331 Hugepages
00:01:01.331 node hugesize free / total
00:01:01.331 node0 1048576kB 0 / 0
00:01:01.331 node0 2048kB 0 / 0
00:01:01.331 node1 1048576kB 0 / 0
00:01:01.331 node1 2048kB 0 / 0
00:01:01.331
00:01:01.331 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:01.331 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:01.331 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:01.331 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:01.331 + rm -f /tmp/spdk-ld-path
00:01:01.331 + source autorun-spdk.conf
00:01:01.331 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.331 ++ SPDK_TEST_NVMF=1
00:01:01.331 ++ SPDK_TEST_NVME_CLI=1
00:01:01.331 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:01.331 ++ SPDK_RUN_UBSAN=1
00:01:01.331 ++ NET_TYPE=phy
00:01:01.331 ++ RUN_NIGHTLY=0
00:01:01.331 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.331 + [[ -n '' ]]
00:01:01.331 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:01.331 + for M in /var/spdk/build-*-manifest.txt
00:01:01.331 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:01.331 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:01.331 + for M in /var/spdk/build-*-manifest.txt
00:01:01.331 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.331 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:01.331 + for M in /var/spdk/build-*-manifest.txt
00:01:01.331 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.331 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:01.331 ++ uname
00:01:01.331 + [[ Linux == \L\i\n\u\x ]]
00:01:01.331 + sudo dmesg -T
00:01:01.331 + sudo dmesg --clear
00:01:01.331 + dmesg_pid=3509803
00:01:01.331 + [[ Fedora Linux == FreeBSD ]]
00:01:01.331 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.331 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.331 + sudo dmesg -Tw
00:01:01.331 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.331 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.331 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.331 + FIO_BIN=/usr/src/fio-static/fio
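
The dmesg sequence above (dump, clear, then follow with -Tw) is how the harness captures only the kernel messages produced while the test runs. A sketch of that pattern with illustrative log paths (the real job keeps the follower's PID in dmesg_pid and reaps it later):

    # Snapshot the kernel ring buffer, clear it, then follow new
    # messages in the background for the duration of the test.
    sudo dmesg -T > pre-test-dmesg.log
    sudo dmesg --clear
    sudo dmesg -Tw > during-test-dmesg.log &
    dmesg_pid=$!
    # ... run the test ...
    sudo kill "$dmesg_pid"
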
00:01:01.331 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.331 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.331 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.331 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.331 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.331 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.331 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.331 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.331 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:01.331 16:13:55 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:01.331 16:13:55 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy
00:01:01.331 16:13:55 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0
00:01:01.331 16:13:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:01.331 16:13:55 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:01.331 16:13:56 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:01.331 16:13:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:01:01.331 16:13:56 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:01.331 16:13:56 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:01.331 16:13:56 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:01.331 16:13:56 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:01.331 16:13:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.331 16:13:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.331 16:13:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.331 16:13:56 -- paths/export.sh@5 -- $ export PATH
00:01:01.331 16:13:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:01.331 16:13:56 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:01:01.331 16:13:56 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:01.331 16:13:56 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733498036.XXXXXX
00:01:01.590 16:13:56 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733498036.UO9eyn
00:01:01.590 16:13:56 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:01.590 16:13:56 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:01.590 16:13:56 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:01:01.590 16:13:56 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:01.590 16:13:56 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:01.590 16:13:56 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:01.590 16:13:56 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:01.590 16:13:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:01.591 16:13:56 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:01:01.591 16:13:56 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:01.591 16:13:56 -- pm/common@17 -- $ local monitor
00:01:01.591 16:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.591 16:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.591 16:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.591 16:13:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:01.591 16:13:56 -- pm/common@25 -- $ sleep 1
00:01:01.591 16:13:56 -- pm/common@21 -- $ date +%s
00:01:01.591 16:13:56 -- pm/common@21 -- $ date +%s
00:01:01.591 16:13:56 -- pm/common@21 -- $ date +%s
00:01:01.591 16:13:56 -- pm/common@21 -- $ date +%s
00:01:01.591 16:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733498036
00:01:01.591 16:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733498036
00:01:01.591 16:13:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733498036
00:01:01.591 16:13:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733498036
00:01:01.591 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733498036_collect-cpu-temp.pm.log
00:01:01.591 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733498036_collect-vmstat.pm.log
00:01:01.591 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733498036_collect-cpu-load.pm.log
00:01:01.591 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733498036_collect-bmc-pm.bmc.pm.log
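
start_monitor_resources above launches one collector per resource, all keyed to a single epoch timestamp so the logs from one run sort together. A simplified sketch of that fan-out (the loop is an illustration, not SPDK's actual pm/common implementation; note that collect-bmc-pm additionally runs under sudo -E in the log):

    # Start each resource monitor in the background with a shared
    # timestamp-based log prefix under the output/power directory.
    ts=$(date +%s)
    outdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        spdk/scripts/perf/pm/$mon -d "$outdir" -l -p "monitor.autobuild.sh.$ts" &
    done
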
00:01:02.527 16:13:57 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:02.527 16:13:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:02.527 16:13:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:02.527 16:13:57 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:02.527 16:13:57 -- spdk/autobuild.sh@16 -- $ date -u
00:01:02.527 Fri Dec 6 03:13:57 PM UTC 2024
00:01:02.527 16:13:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:02.527 v25.01-pre-304-gf9a92382f
00:01:02.527 16:13:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:02.527 16:13:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:02.527 16:13:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:02.527 16:13:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:02.527 16:13:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:02.527 16:13:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.527 ************************************
00:01:02.527 START TEST ubsan
00:01:02.527 ************************************
00:01:02.527 16:13:57 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:02.527 using ubsan
00:01:02.527
00:01:02.527 real 0m0.000s
00:01:02.527 user 0m0.000s
00:01:02.527 sys 0m0.000s
00:01:02.527 16:13:57 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:02.527 16:13:57 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.527 ************************************
00:01:02.527 END TEST ubsan
00:01:02.527 ************************************
00:01:02.527 16:13:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:02.527 16:13:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:02.527 16:13:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:02.527 16:13:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:02.527 16:13:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:02.527 16:13:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:02.527 16:13:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:02.527 16:13:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
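
run_test above brackets each named test with START/END banners and a time measurement, which is where the real/user/sys lines come from. A simplified sketch of that wrapper (the real implementation lives in common/autotest_common.sh and does more, e.g. xtrace management):

    # Run a named test with banners and timing, preserving its status.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test ubsan echo 'using ubsan'
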
00:01:02.527 16:13:57 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:02.527 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:02.527 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:03.095 Using 'verbs' RDMA provider
00:01:15.954 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:25.939 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:25.939 Creating mk/config.mk...done.
00:01:25.939 Creating mk/cc.flags.mk...done.
00:01:25.939 Type 'make' to build.
00:01:25.939 16:14:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j112
00:01:25.939 16:14:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:25.939 16:14:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:25.939 16:14:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.939 ************************************
00:01:25.939 START TEST make
00:01:25.939 ************************************
00:01:25.939 16:14:20 make -- common/autotest_common.sh@1129 -- $ make -j112
00:01:26.561 make[1]: Nothing to be done for 'all'.
00:01:34.673 The Meson build system
00:01:34.673 Version: 1.5.0
00:01:34.673 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:34.673 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:34.673 Build type: native build
00:01:34.673 Program cat found: YES (/usr/bin/cat)
00:01:34.673 Project name: DPDK
00:01:34.673 Project version: 24.03.0
00:01:34.674 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:34.674 C linker for the host machine: cc ld.bfd 2.40-14
00:01:34.674 Host machine cpu family: x86_64
00:01:34.674 Host machine cpu: x86_64
00:01:34.674 Message: ## Building in Developer Mode ##
00:01:34.674 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:34.674 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:34.674 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:34.674 Program python3 found: YES (/usr/bin/python3)
00:01:34.674 Program cat found: YES (/usr/bin/cat)
00:01:34.674 Compiler for C supports arguments -march=native: YES
00:01:34.674 Checking for size of "void *" : 8
00:01:34.674 Checking for size of "void *" : 8 (cached)
00:01:34.674 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:34.674 Library m found: YES
00:01:34.674 Library numa found: YES
00:01:34.674 Has header "numaif.h" : YES
00:01:34.674 Library fdt found: NO
00:01:34.674 Library execinfo found: NO
00:01:34.674 Has header "execinfo.h" : YES
00:01:34.674 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:34.674 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:34.674 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:34.674 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:34.674 Run-time dependency openssl found: YES 3.1.1
00:01:34.674 Run-time dependency libpcap found: YES 1.10.4
00:01:34.674 Has header "pcap.h" with dependency libpcap: YES
00:01:34.674 Compiler for C supports arguments -Wcast-qual: YES
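
Each "Compiler for C supports arguments" line below is meson probing cc with a tiny test program. The same check can be reproduced by hand; a sketch of the idea, not what meson literally executes:

    # Probe whether the compiler accepts a flag: compile an empty
    # program with -Werror so unknown-option warnings become failures.
    flag=-mavx512f
    if echo 'int main(void) { return 0; }' | cc -Werror "$flag" -x c -o /dev/null - 2>/dev/null; then
        echo "Compiler for C supports arguments $flag: YES"
    else
        echo "Compiler for C supports arguments $flag: NO"
    fi
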
00:01:34.674 Compiler for C supports arguments -Wdeprecated: YES
00:01:34.674 Compiler for C supports arguments -Wformat: YES
00:01:34.674 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:34.674 Compiler for C supports arguments -Wformat-security: NO
00:01:34.674 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.674 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:34.674 Compiler for C supports arguments -Wnested-externs: YES
00:01:34.674 Compiler for C supports arguments -Wold-style-definition: YES
00:01:34.674 Compiler for C supports arguments -Wpointer-arith: YES
00:01:34.674 Compiler for C supports arguments -Wsign-compare: YES
00:01:34.674 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:34.674 Compiler for C supports arguments -Wundef: YES
00:01:34.674 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.674 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:34.674 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:34.674 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.674 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:34.674 Program objdump found: YES (/usr/bin/objdump)
00:01:34.674 Compiler for C supports arguments -mavx512f: YES
00:01:34.674 Checking if "AVX512 checking" compiles: YES
00:01:34.674 Fetching value of define "__SSE4_2__" : 1
00:01:34.674 Fetching value of define "__AES__" : 1
00:01:34.674 Fetching value of define "__AVX__" : 1
00:01:34.674 Fetching value of define "__AVX2__" : 1
00:01:34.674 Fetching value of define "__AVX512BW__" : 1
00:01:34.674 Fetching value of define "__AVX512CD__" : 1
00:01:34.674 Fetching value of define "__AVX512DQ__" : 1
00:01:34.674 Fetching value of define "__AVX512F__" : 1
00:01:34.674 Fetching value of define "__AVX512VL__" : 1
00:01:34.674 Fetching value of define "__PCLMUL__" : 1
00:01:34.674 Fetching value of define "__RDRND__" : 1
00:01:34.674 Fetching value of define "__RDSEED__" : 1
00:01:34.674 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:34.674 Fetching value of define "__znver1__" : (undefined)
00:01:34.674 Fetching value of define "__znver2__" : (undefined)
00:01:34.674 Fetching value of define "__znver3__" : (undefined)
00:01:34.674 Fetching value of define "__znver4__" : (undefined)
00:01:34.674 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:34.674 Message: lib/log: Defining dependency "log"
00:01:34.674 Message: lib/kvargs: Defining dependency "kvargs"
00:01:34.674 Message: lib/telemetry: Defining dependency "telemetry"
00:01:34.674 Checking for function "getentropy" : NO
00:01:34.674 Message: lib/eal: Defining dependency "eal"
00:01:34.674 Message: lib/ring: Defining dependency "ring"
00:01:34.674 Message: lib/rcu: Defining dependency "rcu"
00:01:34.674 Message: lib/mempool: Defining dependency "mempool"
00:01:34.674 Message: lib/mbuf: Defining dependency "mbuf"
00:01:34.674 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:34.674 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:34.674 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:34.674 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:34.674 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:34.674 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:34.674 Compiler for C supports arguments -mpclmul: YES
00:01:34.674 Compiler for C supports arguments -maes: YES
00:01:34.674 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:34.674 Compiler for C supports arguments -mavx512bw: YES
00:01:34.674 Compiler for C supports arguments -mavx512dq: YES
00:01:34.674 Compiler for C supports arguments -mavx512vl: YES
00:01:34.674 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:34.674 Compiler for C supports arguments -mavx2: YES
00:01:34.674 Compiler for C supports arguments -mavx: YES
00:01:34.674 Message: lib/net: Defining dependency "net"
00:01:34.674 Message: lib/meter: Defining dependency "meter"
00:01:34.674 Message: lib/ethdev: Defining dependency "ethdev"
00:01:34.674 Message: lib/pci: Defining dependency "pci"
00:01:34.674 Message: lib/cmdline: Defining dependency "cmdline"
00:01:34.674 Message: lib/hash: Defining dependency "hash"
00:01:34.674 Message: lib/timer: Defining dependency "timer"
00:01:34.674 Message: lib/compressdev: Defining dependency "compressdev"
00:01:34.674 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:34.674 Message: lib/dmadev: Defining dependency "dmadev"
00:01:34.674 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:34.674 Message: lib/power: Defining dependency "power"
00:01:34.674 Message: lib/reorder: Defining dependency "reorder"
00:01:34.674 Message: lib/security: Defining dependency "security"
00:01:34.674 Has header "linux/userfaultfd.h" : YES
00:01:34.674 Has header "linux/vduse.h" : YES
00:01:34.674 Message: lib/vhost: Defining dependency "vhost"
00:01:34.674 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:34.674 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:34.674 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:34.674 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:34.674 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:34.674 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:34.674 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:34.674 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:34.674 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:34.674 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:34.674 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:34.674 Configuring doxy-api-html.conf using configuration
00:01:34.674 Configuring doxy-api-man.conf using configuration
00:01:34.674 Program mandb found: YES (/usr/bin/mandb)
00:01:34.674 Program sphinx-build found: NO
00:01:34.674 Configuring rte_build_config.h using configuration
00:01:34.674 Message:
00:01:34.674 =================
00:01:34.674 Applications Enabled
00:01:34.674 =================
00:01:34.674
00:01:34.674 apps:
00:01:34.674
00:01:34.674
00:01:34.674 Message:
00:01:34.674 =================
00:01:34.675 Libraries Enabled
00:01:34.675 =================
00:01:34.675
00:01:34.675 libs:
00:01:34.675 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:34.675 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:34.675 cryptodev, dmadev, power, reorder, security, vhost,
00:01:34.675
00:01:34.675 Message:
00:01:34.675 ===============
00:01:34.675 Drivers Enabled
00:01:34.675 ===============
00:01:34.675
00:01:34.675 common:
00:01:34.675
00:01:34.675 bus:
00:01:34.675 pci, vdev,
00:01:34.675 mempool:
00:01:34.675 ring,
00:01:34.675 dma:
00:01:34.675
00:01:34.675 net:
00:01:34.675
00:01:34.675 crypto:
00:01:34.675
00:01:34.675 compress:
00:01:34.675
00:01:34.675 vdpa:
00:01:34.675
00:01:34.675
00:01:34.675 Message:
00:01:34.675 =================
00:01:34.675 Content Skipped
00:01:34.675 =================
00:01:34.675
00:01:34.675 apps:
00:01:34.675 dumpcap: explicitly disabled via build config
00:01:34.675 graph: explicitly disabled via build config
00:01:34.675 pdump: explicitly disabled via build config
00:01:34.675 proc-info: explicitly disabled via build config
00:01:34.675 test-acl: explicitly disabled via build config
00:01:34.675 test-bbdev: explicitly disabled via build config
00:01:34.675 test-cmdline: explicitly disabled via build config
00:01:34.675 test-compress-perf: explicitly disabled via build config
00:01:34.675 test-crypto-perf: explicitly disabled via build config
00:01:34.675 test-dma-perf: explicitly disabled via build config
00:01:34.675 test-eventdev: explicitly disabled via build config
00:01:34.675 test-fib: explicitly disabled via build config
00:01:34.675 test-flow-perf: explicitly disabled via build config
00:01:34.675 test-gpudev: explicitly disabled via build config
00:01:34.675 test-mldev: explicitly disabled via build config
00:01:34.675 test-pipeline: explicitly disabled via build config
00:01:34.675 test-pmd: explicitly disabled via build config
00:01:34.675 test-regex: explicitly disabled via build config
00:01:34.675 test-sad: explicitly disabled via build config
00:01:34.675 test-security-perf: explicitly disabled via build config
00:01:34.675
00:01:34.675 libs:
00:01:34.675 argparse: explicitly disabled via build config
00:01:34.675 metrics: explicitly disabled via build config
00:01:34.675 acl: explicitly disabled via build config
00:01:34.675 bbdev: explicitly disabled via build config
00:01:34.675 bitratestats: explicitly disabled via build config
00:01:34.675 bpf: explicitly disabled via build config
00:01:34.675 cfgfile: explicitly disabled via build config
00:01:34.675 distributor: explicitly disabled via build config
00:01:34.675 efd: explicitly disabled via build config
00:01:34.675 eventdev: explicitly disabled via build config
00:01:34.675 dispatcher: explicitly disabled via build config
00:01:34.675 gpudev: explicitly disabled via build config
00:01:34.675 gro: explicitly disabled via build config
00:01:34.675 gso: explicitly disabled via build config
00:01:34.675 ip_frag: explicitly disabled via build config
00:01:34.675 jobstats: explicitly disabled via build config
00:01:34.675 latencystats: explicitly disabled via build config
00:01:34.675 lpm: explicitly disabled via build config
00:01:34.675 member: explicitly disabled via build config
00:01:34.675 pcapng: explicitly disabled via build config
00:01:34.675 rawdev: explicitly disabled via build config
00:01:34.675 regexdev: explicitly disabled via build config
00:01:34.675 mldev: explicitly disabled via build config
00:01:34.675 rib: explicitly disabled via build config
00:01:34.675 sched: explicitly disabled via build config
00:01:34.675 stack: explicitly disabled via build config
00:01:34.675 ipsec: explicitly disabled via build config
00:01:34.675 pdcp: explicitly disabled via build config
00:01:34.675 fib: explicitly disabled via build config
00:01:34.675 port: explicitly disabled via build config
00:01:34.675 pdump: explicitly disabled via build config
00:01:34.675 table: explicitly disabled via build config
00:01:34.675 pipeline: explicitly disabled via build config
00:01:34.675 graph: explicitly disabled via build config
00:01:34.675 node: explicitly disabled via build config
00:01:34.675
00:01:34.675 drivers:
00:01:34.675 common/cpt: not in enabled drivers build config
00:01:34.675 common/dpaax: not in enabled drivers build config
00:01:34.675 common/iavf: not in enabled drivers build config
00:01:34.675 common/idpf: not in enabled drivers build config
00:01:34.675 common/ionic: not in enabled drivers build config
00:01:34.675 common/mvep: not in enabled drivers build config
00:01:34.675 common/octeontx: not in enabled drivers build config
00:01:34.675 bus/auxiliary: not in enabled drivers build config
00:01:34.675 bus/cdx: not in enabled drivers build config
00:01:34.675 bus/dpaa: not in enabled drivers build config
00:01:34.675 bus/fslmc: not in enabled drivers build config
00:01:34.675 bus/ifpga: not in enabled drivers build config
00:01:34.675 bus/platform: not in enabled drivers build config
00:01:34.675 bus/uacce: not in enabled drivers build config
00:01:34.675 bus/vmbus: not in enabled drivers build config
00:01:34.675 common/cnxk: not in enabled drivers build config
00:01:34.675 common/mlx5: not in enabled drivers build config
00:01:34.675 common/nfp: not in enabled drivers build config
00:01:34.675 common/nitrox: not in enabled drivers build config
00:01:34.675 common/qat: not in enabled drivers build config
00:01:34.675 common/sfc_efx: not in enabled drivers build config
00:01:34.675 mempool/bucket: not in enabled drivers build config
00:01:34.675 mempool/cnxk: not in enabled drivers build config
00:01:34.675 mempool/dpaa: not in enabled drivers build config
00:01:34.675 mempool/dpaa2: not in enabled drivers build config
00:01:34.675 mempool/octeontx: not in enabled drivers build config
00:01:34.675 mempool/stack: not in enabled drivers build config
00:01:34.675 dma/cnxk: not in enabled drivers build config
00:01:34.675 dma/dpaa: not in enabled drivers build config
00:01:34.675 dma/dpaa2: not in enabled drivers build config
00:01:34.675 dma/hisilicon: not in enabled drivers build config
00:01:34.675 dma/idxd: not in enabled drivers build config
00:01:34.675 dma/ioat: not in enabled drivers build config
00:01:34.675 dma/skeleton: not in enabled drivers build config
00:01:34.675 net/af_packet: not in enabled drivers build config
00:01:34.675 net/af_xdp: not in enabled drivers build config
00:01:34.675 net/ark: not in enabled drivers build config
00:01:34.675 net/atlantic: not in enabled drivers build config
00:01:34.675 net/avp: not in enabled drivers build config
00:01:34.675 net/axgbe: not in enabled drivers build config
00:01:34.675 net/bnx2x: not in enabled drivers build config
00:01:34.675 net/bnxt: not in enabled drivers build config
00:01:34.675 net/bonding: not in enabled drivers build config
00:01:34.675 net/cnxk: not in enabled drivers build config
00:01:34.675 net/cpfl: not in enabled drivers build config
00:01:34.675 net/cxgbe: not in enabled drivers build config
00:01:34.675 net/dpaa: not in enabled drivers build config
00:01:34.675 net/dpaa2: not in enabled drivers build config
00:01:34.676 net/e1000: not in enabled drivers build config
00:01:34.676 net/ena: not in enabled drivers build config
00:01:34.676 net/enetc: not in enabled drivers build config
00:01:34.676 net/enetfec: not in enabled drivers build config
00:01:34.676 net/enic: not in enabled drivers build config
00:01:34.676 net/failsafe: not in enabled drivers build config
00:01:34.676 net/fm10k: not in enabled drivers build config
00:01:34.676 net/gve: not in enabled drivers build config
00:01:34.676 net/hinic: not in enabled drivers build config
00:01:34.676 net/hns3: not in enabled drivers build config
00:01:34.676 net/i40e: not in enabled drivers build config
00:01:34.676 net/iavf: not in enabled drivers build config
00:01:34.676 net/ice: not in enabled drivers build config
00:01:34.676 net/idpf: not in enabled drivers build config
00:01:34.676 net/igc: not in enabled drivers build config
00:01:34.676 net/ionic: not in enabled drivers build config
00:01:34.676 net/ipn3ke: not in enabled drivers build config
00:01:34.676 net/ixgbe: not in enabled drivers build config
00:01:34.676 net/mana: not in enabled drivers build config
00:01:34.676 net/memif: not in enabled drivers build config
00:01:34.676 net/mlx4: not in enabled drivers build config
00:01:34.676 net/mlx5: not in enabled drivers build config
00:01:34.676 net/mvneta: not in enabled drivers build config
00:01:34.676 net/mvpp2: not in enabled drivers build config
00:01:34.676 net/netvsc: not in enabled drivers build config
00:01:34.676 net/nfb: not in enabled drivers build config
00:01:34.676 net/nfp: not in enabled drivers build config
00:01:34.676 net/ngbe: not in enabled drivers build config
00:01:34.676 net/null: not in enabled drivers build config
00:01:34.676 net/octeontx: not in enabled drivers build config
00:01:34.676 net/octeon_ep: not in enabled drivers build config
00:01:34.676 net/pcap: not in enabled drivers build config
00:01:34.676 net/pfe: not in enabled drivers build config
00:01:34.676 net/qede: not in enabled drivers build config
00:01:34.676 net/ring: not in enabled drivers build config
00:01:34.676 net/sfc: not in enabled drivers build config
00:01:34.676 net/softnic: not in enabled drivers build config
00:01:34.676 net/tap: not in enabled drivers build config
00:01:34.676 net/thunderx: not in enabled drivers build config
00:01:34.676 net/txgbe: not in enabled drivers build config
00:01:34.676 net/vdev_netvsc: not in enabled drivers build config
00:01:34.676 net/vhost: not in enabled drivers build config
00:01:34.676 net/virtio: not in enabled drivers build config
00:01:34.676 net/vmxnet3: not in enabled drivers build config
00:01:34.676 raw/*: missing internal dependency, "rawdev"
00:01:34.676 crypto/armv8: not in enabled drivers build config
00:01:34.676 crypto/bcmfs: not in enabled drivers build config
00:01:34.676 crypto/caam_jr: not in enabled drivers build config
00:01:34.676 crypto/ccp: not in enabled drivers build config
00:01:34.676 crypto/cnxk: not in enabled drivers build config
00:01:34.676 crypto/dpaa_sec: not in enabled drivers build config
00:01:34.676 crypto/dpaa2_sec: not in enabled drivers build config
00:01:34.676 crypto/ipsec_mb: not in enabled drivers build config
00:01:34.676 crypto/mlx5: not in enabled drivers build config
00:01:34.676 crypto/mvsam: not in enabled drivers build config
00:01:34.676 crypto/nitrox: not in enabled drivers build config
00:01:34.676 crypto/null: not in enabled drivers build config
00:01:34.676 crypto/octeontx: not in enabled drivers build config
00:01:34.676 crypto/openssl: not in enabled drivers build config
00:01:34.676 crypto/scheduler: not in enabled drivers build config
00:01:34.676 crypto/uadk: not in enabled drivers build config
00:01:34.676 crypto/virtio: not in enabled drivers build config
00:01:34.676 compress/isal: not in enabled drivers build config
00:01:34.676 compress/mlx5: not in enabled drivers build config
00:01:34.676 compress/nitrox: not in enabled drivers build config
00:01:34.676 compress/octeontx: not in enabled drivers build config
00:01:34.676 compress/zlib: not in enabled drivers build config
00:01:34.676 regex/*: missing internal dependency, "regexdev"
00:01:34.676 ml/*: missing internal dependency, "mldev"
00:01:34.676 vdpa/ifc: not in enabled drivers build config
00:01:34.676 vdpa/mlx5: not in enabled drivers build config
00:01:34.676 vdpa/nfp: not in enabled drivers build config
00:01:34.676 vdpa/sfc: not in enabled drivers build config
00:01:34.676 event/*: missing internal dependency, "eventdev"
00:01:34.676 baseband/*: missing internal dependency, "bbdev"
00:01:34.676 gpu/*: missing internal dependency, "gpudev"
00:01:34.676
00:01:34.676
00:01:34.676 Build targets in project: 85
00:01:34.676
00:01:34.676 DPDK 24.03.0
00:01:34.676
00:01:34.676 User defined options
00:01:34.676 buildtype : debug
00:01:34.676 default_library : shared
00:01:34.676 libdir : lib
00:01:34.676 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:34.676 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:34.676 c_link_args :
00:01:34.676 cpu_instruction_set: native
00:01:34.676 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:34.676 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:34.676 enable_docs : false
00:01:34.676 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:34.676 enable_kmods : false
00:01:34.676 max_lcores : 128
00:01:34.676 tests : false
00:01:34.676
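
The "User defined options" block is meson echoing back the configuration it was given; an equivalent meson setup call reconstructed from those values would look roughly like this (a sketch; SPDK's configure/autobuild drives this internally, and the long disable_apps/disable_libs/enable_drivers lists from the log are omitted below only for brevity):

    # Approximate meson invocation implied by the option dump above.
    meson setup build-tmp \
        --buildtype=debug \
        --default-library=shared \
        --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dmax_lcores=128 \
        -Dtests=false
    ninja -C build-tmp
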
00:01:34.676 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:34.676 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:01:34.676 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:34.676 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:34.676 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:34.676 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:34.676 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:34.676 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:34.676 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:34.676 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:34.676 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:34.676 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:34.676 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:34.676 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:34.676 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:34.676 [14/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:34.676 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:34.676 [16/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:34.676 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:34.676 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:34.676 [19/268] Linking static target lib/librte_pci.a
00:01:34.677 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:34.677 [21/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:34.677 [22/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:34.677 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:34.677 [24/268] Linking static target lib/librte_kvargs.a
00:01:34.677 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:34.677 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:34.677 [27/268] Linking static target lib/librte_log.a
00:01:34.677 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:34.677 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:34.677 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:34.677 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:34.677 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:34.677 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:34.677 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:34.677 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:34.677 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:34.677 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:34.677 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:34.677 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:34.677 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:34.677 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:34.677 [42/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:34.677 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:34.677 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:34.677 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:34.677 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:34.677 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:34.677 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:34.677 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:34.677 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:34.677 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:34.677 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:34.677 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:34.677 [54/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:34.677 [55/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:34.677 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:34.677 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:34.677 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:34.677 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:34.677 [60/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:34.677 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:34.677 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:34.677 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:34.677 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:34.677 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:34.677 [66/268] Linking static target lib/librte_meter.a
00:01:34.677 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:34.677 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:34.677 [69/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:34.677 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:34.677 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:34.677 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:34.677 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:34.677 [74/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:34.677 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:34.677 [76/268] Linking static target lib/librte_ring.a
00:01:34.677 [77/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:34.677 [78/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:34.936 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:34.936 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:34.936 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:34.936 [82/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:34.936 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:34.936 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:34.936 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:34.936 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:34.936 [87/268] Linking static target lib/librte_telemetry.a
00:01:34.936 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:34.936 [89/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:34.936 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:34.936 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:34.936 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:34.936 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:34.936 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:34.936 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:34.936 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:34.936 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:34.936 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:34.936 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:34.936 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:34.936 [101/268] Linking static target lib/librte_cmdline.a
00:01:34.936 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:34.936 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:34.936 [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:34.936 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:34.936 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:34.936 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:34.936 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:34.936 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.936 [110/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:34.936 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:34.936 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:34.936 [113/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.936 [114/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:34.936 [115/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:34.936 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:34.936 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:34.936 [118/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:34.936 [119/268] Linking static target lib/librte_timer.a
00:01:34.936 [120/268] Linking static target lib/librte_rcu.a
00:01:34.936 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:34.936 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:34.936 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:34.936 [124/268] Linking static target lib/librte_net.a
00:01:34.936 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:34.936 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:34.936 [127/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:34.936 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:34.936 [129/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:34.936 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:34.936 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:34.936 [132/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:34.936 [133/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:34.936 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:34.936 [135/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:34.936 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:34.936 [137/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:34.936 [138/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:34.936 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:34.936 [141/268] Linking static target lib/librte_mempool.a 00:01:34.936 [142/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:34.936 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:34.936 [144/268] Linking static target lib/librte_eal.a 00:01:34.936 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.936 [146/268] Linking static target lib/librte_dmadev.a 00:01:34.936 [147/268] Linking static target lib/librte_compressdev.a 00:01:34.936 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:34.936 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:34.936 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:34.936 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:34.936 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:34.936 [153/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.936 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.195 [155/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.195 [156/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [157/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.195 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.195 [160/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.195 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.195 [162/268] Linking static target lib/librte_mbuf.a 00:01:35.195 [163/268] Linking target lib/librte_log.so.24.1 00:01:35.195 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.195 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.195 [166/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.195 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.195 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.195 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.195 [170/268] Linking static target lib/librte_power.a 00:01:35.195 [171/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.195 [173/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.195 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.195 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.195 [177/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:35.195 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.195 [179/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.195 [180/268] Generating lib/telemetry.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:35.195 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:35.195 [182/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.195 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.195 [184/268] Linking static target lib/librte_security.a 00:01:35.195 [185/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.195 [186/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.195 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.195 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.195 [189/268] Linking target lib/librte_kvargs.so.24.1 00:01:35.195 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.195 [191/268] Linking target lib/librte_telemetry.so.24.1 00:01:35.195 [192/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.195 [193/268] Linking static target lib/librte_reorder.a 00:01:35.195 [194/268] Linking static target lib/librte_hash.a 00:01:35.454 [195/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:35.454 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:35.454 [197/268] Linking static target lib/librte_cryptodev.a 00:01:35.454 [198/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:35.454 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:35.454 [200/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:35.454 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:35.454 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.454 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:35.454 [204/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:35.454 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.454 [206/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.454 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:35.454 [208/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.454 [209/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.454 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:35.454 [211/268] Linking static target drivers/librte_bus_pci.a 00:01:35.454 [212/268] Linking static target drivers/librte_bus_vdev.a 00:01:35.454 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:35.713 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.713 [215/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.713 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.713 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:35.970 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.970 [219/268] 
Linking static target lib/librte_ethdev.a 00:01:35.970 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.971 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.971 [222/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.971 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.971 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:35.971 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.229 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.229 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.794 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:36.794 [229/268] Linking static target lib/librte_vhost.a 00:01:37.360 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.734 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.994 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.927 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.927 [234/268] Linking target lib/librte_eal.so.24.1 00:01:44.927 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:44.927 [236/268] Linking target lib/librte_ring.so.24.1 00:01:44.927 [237/268] Linking target lib/librte_pci.so.24.1 00:01:44.927 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:44.927 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:44.927 [240/268] Linking target lib/librte_timer.so.24.1 00:01:44.927 [241/268] Linking target lib/librte_meter.so.24.1 00:01:45.184 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:45.184 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:45.184 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:45.184 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:45.184 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:45.184 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:45.184 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:45.184 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:45.184 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:45.184 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:45.184 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:45.441 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:45.441 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:45.441 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:45.441 [256/268] Linking target lib/librte_net.so.24.1 00:01:45.441 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:45.441 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:45.698 [259/268] Generating symbol file 
lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:45.698 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:45.698 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:45.698 [262/268] Linking target lib/librte_hash.so.24.1 00:01:45.699 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:45.699 [264/268] Linking target lib/librte_security.so.24.1 00:01:45.699 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:45.699 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:45.699 [267/268] Linking target lib/librte_power.so.24.1 00:01:45.699 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:45.956 INFO: autodetecting backend as ninja 00:01:45.956 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:55.918 CC lib/log/log.o 00:01:55.918 CC lib/log/log_flags.o 00:01:55.918 CC lib/log/log_deprecated.o 00:01:55.918 CC lib/ut/ut.o 00:01:55.918 CC lib/ut_mock/mock.o 00:01:55.918 LIB libspdk_log.a 00:01:55.918 LIB libspdk_ut.a 00:01:55.918 LIB libspdk_ut_mock.a 00:01:55.918 SO libspdk_log.so.7.1 00:01:55.918 SO libspdk_ut.so.2.0 00:01:55.918 SO libspdk_ut_mock.so.6.0 00:01:55.918 SYMLINK libspdk_log.so 00:01:55.918 SYMLINK libspdk_ut.so 00:01:55.918 SYMLINK libspdk_ut_mock.so 00:01:55.918 CC lib/dma/dma.o 00:01:55.918 CC lib/util/base64.o 00:01:55.918 CC lib/util/crc16.o 00:01:55.918 CC lib/util/bit_array.o 00:01:55.918 CC lib/util/cpuset.o 00:01:55.918 CC lib/util/crc32_ieee.o 00:01:55.918 CC lib/util/crc32.o 00:01:55.918 CC lib/util/crc32c.o 00:01:55.918 CC lib/util/dif.o 00:01:55.918 CC lib/util/fd.o 00:01:55.918 CC lib/util/crc64.o 00:01:55.918 CC lib/util/fd_group.o 00:01:55.918 CC lib/util/file.o 00:01:55.918 CC lib/util/hexlify.o 00:01:55.918 CC lib/util/iov.o 00:01:55.918 CC lib/util/math.o 00:01:55.918 CC lib/util/net.o 00:01:55.918 CC lib/util/pipe.o 00:01:55.918 CC lib/util/strerror_tls.o 00:01:55.918 CC lib/util/uuid.o 00:01:55.918 CC lib/util/string.o 00:01:55.918 CC lib/util/xor.o 00:01:55.918 CC lib/util/zipf.o 00:01:55.918 CC lib/util/md5.o 00:01:55.918 CC lib/ioat/ioat.o 00:01:55.918 CXX lib/trace_parser/trace.o 00:01:55.918 LIB libspdk_dma.a 00:01:55.918 SO libspdk_dma.so.5.0 00:01:55.918 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.918 CC lib/vfio_user/host/vfio_user.o 00:01:55.918 SYMLINK libspdk_dma.so 00:01:55.918 LIB libspdk_ioat.a 00:01:55.918 SO libspdk_ioat.so.7.0 00:01:55.918 SYMLINK libspdk_ioat.so 00:01:55.918 LIB libspdk_vfio_user.a 00:01:55.918 LIB libspdk_util.a 00:01:55.918 SO libspdk_vfio_user.so.5.0 00:01:55.918 SO libspdk_util.so.10.1 00:01:55.918 SYMLINK libspdk_vfio_user.so 00:01:55.918 SYMLINK libspdk_util.so 00:01:55.918 LIB libspdk_trace_parser.a 00:01:55.918 SO libspdk_trace_parser.so.6.0 00:01:55.918 SYMLINK libspdk_trace_parser.so 00:01:55.918 CC lib/env_dpdk/env.o 00:01:55.918 CC lib/env_dpdk/init.o 00:01:55.918 CC lib/env_dpdk/memory.o 00:01:55.918 CC lib/env_dpdk/pci.o 00:01:55.918 CC lib/env_dpdk/threads.o 00:01:55.918 CC lib/env_dpdk/pci_ioat.o 00:01:55.918 CC lib/env_dpdk/pci_virtio.o 00:01:55.918 CC lib/env_dpdk/pci_vmd.o 00:01:55.918 CC lib/env_dpdk/pci_idxd.o 00:01:55.918 CC lib/env_dpdk/pci_event.o 00:01:55.918 CC lib/env_dpdk/sigbus_handler.o 00:01:55.918 CC lib/env_dpdk/pci_dpdk.o 00:01:55.918 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.918 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.918 CC lib/vmd/led.o 
00:01:55.918 CC lib/vmd/vmd.o 00:01:55.918 CC lib/conf/conf.o 00:01:55.918 CC lib/rdma_utils/rdma_utils.o 00:01:55.918 CC lib/json/json_parse.o 00:01:55.918 CC lib/json/json_util.o 00:01:55.918 CC lib/json/json_write.o 00:01:55.918 CC lib/idxd/idxd.o 00:01:55.918 CC lib/idxd/idxd_user.o 00:01:55.918 CC lib/idxd/idxd_kernel.o 00:01:55.918 LIB libspdk_conf.a 00:01:56.177 LIB libspdk_json.a 00:01:56.177 LIB libspdk_rdma_utils.a 00:01:56.177 SO libspdk_conf.so.6.0 00:01:56.177 SO libspdk_rdma_utils.so.1.0 00:01:56.177 SO libspdk_json.so.6.0 00:01:56.177 SYMLINK libspdk_conf.so 00:01:56.177 SYMLINK libspdk_json.so 00:01:56.177 SYMLINK libspdk_rdma_utils.so 00:01:56.177 LIB libspdk_idxd.a 00:01:56.177 LIB libspdk_vmd.a 00:01:56.177 SO libspdk_idxd.so.12.1 00:01:56.177 SO libspdk_vmd.so.6.0 00:01:56.435 SYMLINK libspdk_idxd.so 00:01:56.435 SYMLINK libspdk_vmd.so 00:01:56.435 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.435 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.435 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.435 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.435 CC lib/rdma_provider/common.o 00:01:56.435 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.692 LIB libspdk_jsonrpc.a 00:01:56.692 LIB libspdk_rdma_provider.a 00:01:56.692 SO libspdk_jsonrpc.so.6.0 00:01:56.692 SO libspdk_rdma_provider.so.7.0 00:01:56.692 SYMLINK libspdk_rdma_provider.so 00:01:56.692 SYMLINK libspdk_jsonrpc.so 00:01:56.692 LIB libspdk_env_dpdk.a 00:01:56.692 SO libspdk_env_dpdk.so.15.1 00:01:56.950 SYMLINK libspdk_env_dpdk.so 00:01:56.950 CC lib/rpc/rpc.o 00:01:57.209 LIB libspdk_rpc.a 00:01:57.209 SO libspdk_rpc.so.6.0 00:01:57.209 SYMLINK libspdk_rpc.so 00:01:57.467 CC lib/keyring/keyring.o 00:01:57.467 CC lib/keyring/keyring_rpc.o 00:01:57.467 CC lib/notify/notify.o 00:01:57.467 CC lib/notify/notify_rpc.o 00:01:57.467 CC lib/trace/trace.o 00:01:57.467 CC lib/trace/trace_flags.o 00:01:57.467 CC lib/trace/trace_rpc.o 00:01:57.725 LIB libspdk_notify.a 00:01:57.725 LIB libspdk_keyring.a 00:01:57.725 SO libspdk_notify.so.6.0 00:01:57.725 LIB libspdk_trace.a 00:01:57.725 SO libspdk_keyring.so.2.0 00:01:57.725 SO libspdk_trace.so.11.0 00:01:57.725 SYMLINK libspdk_notify.so 00:01:57.725 SYMLINK libspdk_keyring.so 00:01:57.725 SYMLINK libspdk_trace.so 00:01:58.291 CC lib/sock/sock_rpc.o 00:01:58.291 CC lib/thread/thread.o 00:01:58.291 CC lib/sock/sock.o 00:01:58.291 CC lib/thread/iobuf.o 00:01:58.550 LIB libspdk_sock.a 00:01:58.550 SO libspdk_sock.so.10.0 00:01:58.550 SYMLINK libspdk_sock.so 00:01:58.855 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.855 CC lib/nvme/nvme_ctrlr.o 00:01:58.855 CC lib/nvme/nvme_fabric.o 00:01:58.855 CC lib/nvme/nvme_ns_cmd.o 00:01:58.855 CC lib/nvme/nvme_ns.o 00:01:58.855 CC lib/nvme/nvme_pcie_common.o 00:01:58.855 CC lib/nvme/nvme_pcie.o 00:01:58.855 CC lib/nvme/nvme_qpair.o 00:01:58.855 CC lib/nvme/nvme.o 00:01:58.855 CC lib/nvme/nvme_quirks.o 00:01:58.855 CC lib/nvme/nvme_transport.o 00:01:58.855 CC lib/nvme/nvme_tcp.o 00:01:58.855 CC lib/nvme/nvme_discovery.o 00:01:58.855 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.855 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.855 CC lib/nvme/nvme_opal.o 00:01:58.855 CC lib/nvme/nvme_io_msg.o 00:01:58.855 CC lib/nvme/nvme_poll_group.o 00:01:58.855 CC lib/nvme/nvme_zns.o 00:01:58.855 CC lib/nvme/nvme_stubs.o 00:01:58.855 CC lib/nvme/nvme_auth.o 00:01:58.855 CC lib/nvme/nvme_cuse.o 00:01:58.855 CC lib/nvme/nvme_rdma.o 00:01:59.113 LIB libspdk_thread.a 00:01:59.113 SO libspdk_thread.so.11.0 00:01:59.113 SYMLINK libspdk_thread.so 00:01:59.371 CC lib/accel/accel_sw.o 
00:01:59.371 CC lib/accel/accel.o 00:01:59.371 CC lib/accel/accel_rpc.o 00:01:59.371 CC lib/blob/request.o 00:01:59.371 CC lib/blob/blobstore.o 00:01:59.371 CC lib/blob/zeroes.o 00:01:59.371 CC lib/blob/blob_bs_dev.o 00:01:59.371 CC lib/init/json_config.o 00:01:59.371 CC lib/init/subsystem.o 00:01:59.371 CC lib/virtio/virtio.o 00:01:59.371 CC lib/init/rpc.o 00:01:59.371 CC lib/init/subsystem_rpc.o 00:01:59.371 CC lib/virtio/virtio_vhost_user.o 00:01:59.371 CC lib/virtio/virtio_vfio_user.o 00:01:59.371 CC lib/virtio/virtio_pci.o 00:01:59.371 CC lib/fsdev/fsdev.o 00:01:59.371 CC lib/fsdev/fsdev_io.o 00:01:59.371 CC lib/fsdev/fsdev_rpc.o 00:01:59.628 LIB libspdk_init.a 00:01:59.628 SO libspdk_init.so.6.0 00:01:59.628 LIB libspdk_virtio.a 00:01:59.628 SO libspdk_virtio.so.7.0 00:01:59.906 SYMLINK libspdk_init.so 00:01:59.906 SYMLINK libspdk_virtio.so 00:01:59.906 LIB libspdk_fsdev.a 00:01:59.906 SO libspdk_fsdev.so.2.0 00:01:59.906 SYMLINK libspdk_fsdev.so 00:02:00.165 CC lib/event/log_rpc.o 00:02:00.165 CC lib/event/app.o 00:02:00.165 CC lib/event/reactor.o 00:02:00.165 CC lib/event/scheduler_static.o 00:02:00.165 CC lib/event/app_rpc.o 00:02:00.165 LIB libspdk_accel.a 00:02:00.165 SO libspdk_accel.so.16.0 00:02:00.165 LIB libspdk_nvme.a 00:02:00.165 SYMLINK libspdk_accel.so 00:02:00.423 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:00.423 SO libspdk_nvme.so.15.0 00:02:00.423 LIB libspdk_event.a 00:02:00.423 SO libspdk_event.so.14.0 00:02:00.423 SYMLINK libspdk_event.so 00:02:00.423 SYMLINK libspdk_nvme.so 00:02:00.423 CC lib/bdev/bdev.o 00:02:00.679 CC lib/bdev/part.o 00:02:00.679 CC lib/bdev/bdev_rpc.o 00:02:00.679 CC lib/bdev/bdev_zone.o 00:02:00.679 CC lib/bdev/scsi_nvme.o 00:02:00.679 LIB libspdk_fuse_dispatcher.a 00:02:00.679 SO libspdk_fuse_dispatcher.so.1.0 00:02:00.937 SYMLINK libspdk_fuse_dispatcher.so 00:02:01.504 LIB libspdk_blob.a 00:02:01.504 SO libspdk_blob.so.12.0 00:02:01.504 SYMLINK libspdk_blob.so 00:02:01.762 CC lib/lvol/lvol.o 00:02:01.762 CC lib/blobfs/blobfs.o 00:02:01.762 CC lib/blobfs/tree.o 00:02:02.327 LIB libspdk_bdev.a 00:02:02.327 SO libspdk_bdev.so.17.0 00:02:02.327 SYMLINK libspdk_bdev.so 00:02:02.327 LIB libspdk_blobfs.a 00:02:02.328 LIB libspdk_lvol.a 00:02:02.328 SO libspdk_blobfs.so.11.0 00:02:02.328 SO libspdk_lvol.so.11.0 00:02:02.587 SYMLINK libspdk_lvol.so 00:02:02.587 SYMLINK libspdk_blobfs.so 00:02:02.587 CC lib/nbd/nbd.o 00:02:02.587 CC lib/nbd/nbd_rpc.o 00:02:02.587 CC lib/nvmf/ctrlr_discovery.o 00:02:02.587 CC lib/nvmf/ctrlr_bdev.o 00:02:02.587 CC lib/nvmf/ctrlr.o 00:02:02.587 CC lib/nvmf/subsystem.o 00:02:02.587 CC lib/nvmf/nvmf.o 00:02:02.587 CC lib/nvmf/nvmf_rpc.o 00:02:02.587 CC lib/nvmf/tcp.o 00:02:02.587 CC lib/nvmf/transport.o 00:02:02.587 CC lib/nvmf/stubs.o 00:02:02.587 CC lib/nvmf/rdma.o 00:02:02.587 CC lib/nvmf/mdns_server.o 00:02:02.587 CC lib/nvmf/auth.o 00:02:02.587 CC lib/ublk/ublk.o 00:02:02.587 CC lib/ftl/ftl_core.o 00:02:02.587 CC lib/scsi/dev.o 00:02:02.587 CC lib/ublk/ublk_rpc.o 00:02:02.587 CC lib/scsi/lun.o 00:02:02.587 CC lib/ftl/ftl_init.o 00:02:02.587 CC lib/scsi/port.o 00:02:02.587 CC lib/ftl/ftl_layout.o 00:02:02.587 CC lib/scsi/scsi.o 00:02:02.587 CC lib/ftl/ftl_debug.o 00:02:02.587 CC lib/ftl/ftl_io.o 00:02:02.587 CC lib/scsi/scsi_bdev.o 00:02:02.587 CC lib/ftl/ftl_l2p.o 00:02:02.587 CC lib/scsi/scsi_pr.o 00:02:02.587 CC lib/ftl/ftl_sb.o 00:02:02.587 CC lib/scsi/scsi_rpc.o 00:02:02.587 CC lib/scsi/task.o 00:02:02.587 CC lib/ftl/ftl_l2p_flat.o 00:02:02.587 CC lib/ftl/ftl_nv_cache.o 00:02:02.587 CC lib/ftl/ftl_band.o 
00:02:02.587 CC lib/ftl/ftl_band_ops.o 00:02:02.587 CC lib/ftl/ftl_rq.o 00:02:02.587 CC lib/ftl/ftl_writer.o 00:02:02.587 CC lib/ftl/ftl_reloc.o 00:02:02.587 CC lib/ftl/ftl_l2p_cache.o 00:02:02.587 CC lib/ftl/ftl_p2l.o 00:02:02.587 CC lib/ftl/ftl_p2l_log.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:02.587 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:02.587 CC lib/ftl/utils/ftl_conf.o 00:02:02.587 CC lib/ftl/utils/ftl_md.o 00:02:02.587 CC lib/ftl/utils/ftl_mempool.o 00:02:02.587 CC lib/ftl/utils/ftl_bitmap.o 00:02:02.587 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:02.587 CC lib/ftl/utils/ftl_property.o 00:02:02.587 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:02.587 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:02.587 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:02.587 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:02.587 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:02.587 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:02.587 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:02.587 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:02.587 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:02.846 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:02.846 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:02.846 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:02.846 CC lib/ftl/base/ftl_base_dev.o 00:02:02.846 CC lib/ftl/base/ftl_base_bdev.o 00:02:02.846 CC lib/ftl/ftl_trace.o 00:02:03.103 LIB libspdk_nbd.a 00:02:03.103 SO libspdk_nbd.so.7.0 00:02:03.103 SYMLINK libspdk_nbd.so 00:02:03.103 LIB libspdk_scsi.a 00:02:03.360 LIB libspdk_ublk.a 00:02:03.360 SO libspdk_scsi.so.9.0 00:02:03.360 SO libspdk_ublk.so.3.0 00:02:03.360 SYMLINK libspdk_scsi.so 00:02:03.360 SYMLINK libspdk_ublk.so 00:02:03.360 LIB libspdk_ftl.a 00:02:03.618 CC lib/iscsi/conn.o 00:02:03.618 CC lib/iscsi/init_grp.o 00:02:03.618 CC lib/iscsi/iscsi.o 00:02:03.618 CC lib/iscsi/param.o 00:02:03.618 CC lib/iscsi/portal_grp.o 00:02:03.618 CC lib/iscsi/tgt_node.o 00:02:03.618 CC lib/iscsi/iscsi_subsystem.o 00:02:03.618 CC lib/iscsi/iscsi_rpc.o 00:02:03.618 CC lib/iscsi/task.o 00:02:03.618 SO libspdk_ftl.so.9.0 00:02:03.618 CC lib/vhost/vhost.o 00:02:03.618 CC lib/vhost/vhost_rpc.o 00:02:03.618 CC lib/vhost/vhost_scsi.o 00:02:03.618 CC lib/vhost/vhost_blk.o 00:02:03.618 CC lib/vhost/rte_vhost_user.o 00:02:03.876 SYMLINK libspdk_ftl.so 00:02:04.134 LIB libspdk_nvmf.a 00:02:04.134 SO libspdk_nvmf.so.20.0 00:02:04.391 LIB libspdk_vhost.a 00:02:04.391 SO libspdk_vhost.so.8.0 00:02:04.391 SYMLINK libspdk_nvmf.so 00:02:04.391 SYMLINK libspdk_vhost.so 00:02:04.391 LIB libspdk_iscsi.a 00:02:04.391 SO libspdk_iscsi.so.8.0 00:02:04.650 SYMLINK libspdk_iscsi.so 00:02:05.216 CC module/env_dpdk/env_dpdk_rpc.o 00:02:05.216 LIB libspdk_env_dpdk_rpc.a 00:02:05.216 CC module/keyring/file/keyring.o 00:02:05.216 CC module/keyring/file/keyring_rpc.o 00:02:05.216 CC module/accel/dsa/accel_dsa.o 00:02:05.216 CC module/accel/dsa/accel_dsa_rpc.o 00:02:05.216 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:05.216 CC module/accel/error/accel_error.o 00:02:05.216 CC module/accel/error/accel_error_rpc.o 00:02:05.216 CC 
module/accel/ioat/accel_ioat.o 00:02:05.216 CC module/blob/bdev/blob_bdev.o 00:02:05.216 CC module/accel/ioat/accel_ioat_rpc.o 00:02:05.216 CC module/sock/posix/posix.o 00:02:05.216 CC module/keyring/linux/keyring.o 00:02:05.216 CC module/keyring/linux/keyring_rpc.o 00:02:05.216 SO libspdk_env_dpdk_rpc.so.6.0 00:02:05.216 CC module/accel/iaa/accel_iaa.o 00:02:05.216 CC module/accel/iaa/accel_iaa_rpc.o 00:02:05.216 CC module/scheduler/gscheduler/gscheduler.o 00:02:05.216 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:05.216 CC module/fsdev/aio/fsdev_aio.o 00:02:05.216 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:05.216 CC module/fsdev/aio/linux_aio_mgr.o 00:02:05.216 SYMLINK libspdk_env_dpdk_rpc.so 00:02:05.216 LIB libspdk_keyring_file.a 00:02:05.475 SO libspdk_keyring_file.so.2.0 00:02:05.475 LIB libspdk_keyring_linux.a 00:02:05.475 LIB libspdk_scheduler_dpdk_governor.a 00:02:05.475 LIB libspdk_scheduler_gscheduler.a 00:02:05.475 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:05.475 SO libspdk_keyring_linux.so.1.0 00:02:05.475 LIB libspdk_accel_ioat.a 00:02:05.475 SO libspdk_scheduler_gscheduler.so.4.0 00:02:05.475 LIB libspdk_accel_error.a 00:02:05.475 SYMLINK libspdk_keyring_file.so 00:02:05.475 LIB libspdk_accel_iaa.a 00:02:05.475 SO libspdk_accel_ioat.so.6.0 00:02:05.475 LIB libspdk_scheduler_dynamic.a 00:02:05.475 LIB libspdk_accel_dsa.a 00:02:05.475 SO libspdk_accel_iaa.so.3.0 00:02:05.475 SO libspdk_accel_error.so.2.0 00:02:05.475 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:05.475 LIB libspdk_blob_bdev.a 00:02:05.475 SYMLINK libspdk_keyring_linux.so 00:02:05.475 SYMLINK libspdk_scheduler_gscheduler.so 00:02:05.475 SO libspdk_scheduler_dynamic.so.4.0 00:02:05.475 SO libspdk_accel_dsa.so.5.0 00:02:05.475 SO libspdk_blob_bdev.so.12.0 00:02:05.475 SYMLINK libspdk_accel_ioat.so 00:02:05.475 SYMLINK libspdk_accel_iaa.so 00:02:05.475 SYMLINK libspdk_accel_error.so 00:02:05.475 SYMLINK libspdk_scheduler_dynamic.so 00:02:05.475 SYMLINK libspdk_accel_dsa.so 00:02:05.475 SYMLINK libspdk_blob_bdev.so 00:02:05.736 LIB libspdk_fsdev_aio.a 00:02:05.737 SO libspdk_fsdev_aio.so.1.0 00:02:05.737 LIB libspdk_sock_posix.a 00:02:05.737 SO libspdk_sock_posix.so.6.0 00:02:05.737 SYMLINK libspdk_fsdev_aio.so 00:02:05.993 SYMLINK libspdk_sock_posix.so 00:02:05.993 CC module/bdev/split/vbdev_split.o 00:02:05.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:05.993 CC module/blobfs/bdev/blobfs_bdev.o 00:02:05.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:05.993 CC module/bdev/ftl/bdev_ftl.o 00:02:05.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:05.993 CC module/bdev/raid/bdev_raid.o 00:02:05.993 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:05.993 CC module/bdev/raid/bdev_raid_rpc.o 00:02:05.993 CC module/bdev/passthru/vbdev_passthru.o 00:02:05.993 CC module/bdev/lvol/vbdev_lvol.o 00:02:05.993 CC module/bdev/raid/bdev_raid_sb.o 00:02:05.993 CC module/bdev/error/vbdev_error.o 00:02:05.993 CC module/bdev/raid/raid0.o 00:02:05.993 CC module/bdev/iscsi/bdev_iscsi.o 00:02:05.993 CC module/bdev/raid/raid1.o 00:02:05.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:05.993 CC module/bdev/raid/concat.o 00:02:05.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:05.993 CC module/bdev/error/vbdev_error_rpc.o 00:02:05.993 CC module/bdev/gpt/gpt.o 00:02:05.993 CC module/bdev/gpt/vbdev_gpt.o 00:02:05.993 CC module/bdev/nvme/bdev_nvme.o 00:02:05.993 CC module/bdev/null/bdev_null_rpc.o 00:02:05.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:05.993 CC module/bdev/nvme/nvme_rpc.o 00:02:05.993 CC 
module/bdev/null/bdev_null.o 00:02:05.993 CC module/bdev/delay/vbdev_delay.o 00:02:05.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:05.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:05.993 CC module/bdev/nvme/bdev_mdns_client.o 00:02:05.993 CC module/bdev/nvme/vbdev_opal.o 00:02:05.993 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:05.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:05.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:05.993 CC module/bdev/malloc/bdev_malloc.o 00:02:05.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:05.993 CC module/bdev/aio/bdev_aio.o 00:02:05.993 CC module/bdev/aio/bdev_aio_rpc.o 00:02:05.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:05.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:05.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:06.250 LIB libspdk_blobfs_bdev.a 00:02:06.250 LIB libspdk_bdev_split.a 00:02:06.250 SO libspdk_blobfs_bdev.so.6.0 00:02:06.250 LIB libspdk_bdev_null.a 00:02:06.250 SO libspdk_bdev_split.so.6.0 00:02:06.250 SYMLINK libspdk_blobfs_bdev.so 00:02:06.250 LIB libspdk_bdev_gpt.a 00:02:06.250 LIB libspdk_bdev_error.a 00:02:06.250 SO libspdk_bdev_null.so.6.0 00:02:06.250 LIB libspdk_bdev_ftl.a 00:02:06.250 LIB libspdk_bdev_passthru.a 00:02:06.250 SO libspdk_bdev_gpt.so.6.0 00:02:06.250 SYMLINK libspdk_bdev_split.so 00:02:06.250 SO libspdk_bdev_error.so.6.0 00:02:06.250 SO libspdk_bdev_ftl.so.6.0 00:02:06.250 LIB libspdk_bdev_zone_block.a 00:02:06.250 SO libspdk_bdev_passthru.so.6.0 00:02:06.250 SYMLINK libspdk_bdev_null.so 00:02:06.250 LIB libspdk_bdev_iscsi.a 00:02:06.250 LIB libspdk_bdev_aio.a 00:02:06.250 SYMLINK libspdk_bdev_ftl.so 00:02:06.250 SO libspdk_bdev_zone_block.so.6.0 00:02:06.250 LIB libspdk_bdev_malloc.a 00:02:06.250 SYMLINK libspdk_bdev_error.so 00:02:06.250 SYMLINK libspdk_bdev_gpt.so 00:02:06.506 SO libspdk_bdev_iscsi.so.6.0 00:02:06.506 LIB libspdk_bdev_delay.a 00:02:06.506 SO libspdk_bdev_aio.so.6.0 00:02:06.506 SO libspdk_bdev_malloc.so.6.0 00:02:06.506 SYMLINK libspdk_bdev_passthru.so 00:02:06.506 SYMLINK libspdk_bdev_zone_block.so 00:02:06.506 SO libspdk_bdev_delay.so.6.0 00:02:06.506 LIB libspdk_bdev_lvol.a 00:02:06.506 SYMLINK libspdk_bdev_iscsi.so 00:02:06.506 SYMLINK libspdk_bdev_malloc.so 00:02:06.506 SYMLINK libspdk_bdev_aio.so 00:02:06.506 SO libspdk_bdev_lvol.so.6.0 00:02:06.506 LIB libspdk_bdev_virtio.a 00:02:06.506 SYMLINK libspdk_bdev_delay.so 00:02:06.506 SO libspdk_bdev_virtio.so.6.0 00:02:06.506 SYMLINK libspdk_bdev_lvol.so 00:02:06.506 SYMLINK libspdk_bdev_virtio.so 00:02:06.763 LIB libspdk_bdev_raid.a 00:02:06.763 SO libspdk_bdev_raid.so.6.0 00:02:06.763 SYMLINK libspdk_bdev_raid.so 00:02:07.698 LIB libspdk_bdev_nvme.a 00:02:07.698 SO libspdk_bdev_nvme.so.7.1 00:02:07.956 SYMLINK libspdk_bdev_nvme.so 00:02:08.524 CC module/event/subsystems/vmd/vmd.o 00:02:08.524 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:08.524 CC module/event/subsystems/keyring/keyring.o 00:02:08.524 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:08.524 CC module/event/subsystems/fsdev/fsdev.o 00:02:08.524 CC module/event/subsystems/sock/sock.o 00:02:08.524 CC module/event/subsystems/scheduler/scheduler.o 00:02:08.524 CC module/event/subsystems/iobuf/iobuf.o 00:02:08.524 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:08.524 LIB libspdk_event_keyring.a 00:02:08.524 LIB libspdk_event_vmd.a 00:02:08.524 LIB libspdk_event_sock.a 00:02:08.524 SO libspdk_event_vmd.so.6.0 00:02:08.524 SO libspdk_event_keyring.so.1.0 00:02:08.524 LIB libspdk_event_vhost_blk.a 00:02:08.524 LIB 
libspdk_event_fsdev.a 00:02:08.524 LIB libspdk_event_scheduler.a 00:02:08.524 LIB libspdk_event_iobuf.a 00:02:08.524 SO libspdk_event_vhost_blk.so.3.0 00:02:08.524 SO libspdk_event_sock.so.5.0 00:02:08.783 SO libspdk_event_fsdev.so.1.0 00:02:08.783 SO libspdk_event_scheduler.so.4.0 00:02:08.783 SYMLINK libspdk_event_vmd.so 00:02:08.783 SYMLINK libspdk_event_keyring.so 00:02:08.783 SO libspdk_event_iobuf.so.3.0 00:02:08.783 SYMLINK libspdk_event_vhost_blk.so 00:02:08.783 SYMLINK libspdk_event_sock.so 00:02:08.783 SYMLINK libspdk_event_fsdev.so 00:02:08.783 SYMLINK libspdk_event_scheduler.so 00:02:08.783 SYMLINK libspdk_event_iobuf.so 00:02:09.041 CC module/event/subsystems/accel/accel.o 00:02:09.299 LIB libspdk_event_accel.a 00:02:09.299 SO libspdk_event_accel.so.6.0 00:02:09.299 SYMLINK libspdk_event_accel.so 00:02:09.558 CC module/event/subsystems/bdev/bdev.o 00:02:09.816 LIB libspdk_event_bdev.a 00:02:09.816 SO libspdk_event_bdev.so.6.0 00:02:09.816 SYMLINK libspdk_event_bdev.so 00:02:10.075 CC module/event/subsystems/scsi/scsi.o 00:02:10.075 CC module/event/subsystems/nbd/nbd.o 00:02:10.075 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:10.075 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:10.075 CC module/event/subsystems/ublk/ublk.o 00:02:10.333 LIB libspdk_event_nbd.a 00:02:10.333 LIB libspdk_event_scsi.a 00:02:10.333 LIB libspdk_event_ublk.a 00:02:10.333 SO libspdk_event_nbd.so.6.0 00:02:10.333 SO libspdk_event_scsi.so.6.0 00:02:10.333 SO libspdk_event_ublk.so.3.0 00:02:10.333 SYMLINK libspdk_event_nbd.so 00:02:10.333 LIB libspdk_event_nvmf.a 00:02:10.333 SYMLINK libspdk_event_scsi.so 00:02:10.333 SYMLINK libspdk_event_ublk.so 00:02:10.333 SO libspdk_event_nvmf.so.6.0 00:02:10.333 SYMLINK libspdk_event_nvmf.so 00:02:10.591 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:10.591 CC module/event/subsystems/iscsi/iscsi.o 00:02:10.851 LIB libspdk_event_vhost_scsi.a 00:02:10.851 LIB libspdk_event_iscsi.a 00:02:10.851 SO libspdk_event_vhost_scsi.so.3.0 00:02:10.851 SO libspdk_event_iscsi.so.6.0 00:02:10.851 SYMLINK libspdk_event_vhost_scsi.so 00:02:10.851 SYMLINK libspdk_event_iscsi.so 00:02:11.108 SO libspdk.so.6.0 00:02:11.108 SYMLINK libspdk.so 00:02:11.367 CXX app/trace/trace.o 00:02:11.367 CC app/trace_record/trace_record.o 00:02:11.367 CC app/spdk_nvme_discover/discovery_aer.o 00:02:11.367 CC app/spdk_nvme_perf/perf.o 00:02:11.367 CC app/spdk_top/spdk_top.o 00:02:11.367 CC app/spdk_lspci/spdk_lspci.o 00:02:11.367 CC app/spdk_nvme_identify/identify.o 00:02:11.367 CC test/rpc_client/rpc_client_test.o 00:02:11.367 TEST_HEADER include/spdk/accel.h 00:02:11.367 TEST_HEADER include/spdk/accel_module.h 00:02:11.367 TEST_HEADER include/spdk/barrier.h 00:02:11.367 TEST_HEADER include/spdk/assert.h 00:02:11.367 TEST_HEADER include/spdk/bdev.h 00:02:11.367 TEST_HEADER include/spdk/base64.h 00:02:11.367 TEST_HEADER include/spdk/bdev_module.h 00:02:11.367 TEST_HEADER include/spdk/bdev_zone.h 00:02:11.367 TEST_HEADER include/spdk/bit_array.h 00:02:11.367 TEST_HEADER include/spdk/bit_pool.h 00:02:11.367 TEST_HEADER include/spdk/blob.h 00:02:11.367 TEST_HEADER include/spdk/blob_bdev.h 00:02:11.367 TEST_HEADER include/spdk/conf.h 00:02:11.367 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:11.367 TEST_HEADER include/spdk/config.h 00:02:11.367 TEST_HEADER include/spdk/blobfs.h 00:02:11.367 TEST_HEADER include/spdk/cpuset.h 00:02:11.367 TEST_HEADER include/spdk/crc32.h 00:02:11.367 TEST_HEADER include/spdk/crc16.h 00:02:11.367 TEST_HEADER include/spdk/crc64.h 00:02:11.367 TEST_HEADER 
include/spdk/dif.h 00:02:11.367 TEST_HEADER include/spdk/dma.h 00:02:11.367 TEST_HEADER include/spdk/env.h 00:02:11.367 TEST_HEADER include/spdk/endian.h 00:02:11.367 TEST_HEADER include/spdk/env_dpdk.h 00:02:11.367 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:11.367 TEST_HEADER include/spdk/fd.h 00:02:11.367 TEST_HEADER include/spdk/file.h 00:02:11.367 TEST_HEADER include/spdk/event.h 00:02:11.367 TEST_HEADER include/spdk/fd_group.h 00:02:11.367 TEST_HEADER include/spdk/fsdev_module.h 00:02:11.367 TEST_HEADER include/spdk/fsdev.h 00:02:11.367 TEST_HEADER include/spdk/ftl.h 00:02:11.367 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:11.367 CC app/nvmf_tgt/nvmf_main.o 00:02:11.367 TEST_HEADER include/spdk/gpt_spec.h 00:02:11.367 TEST_HEADER include/spdk/idxd.h 00:02:11.367 TEST_HEADER include/spdk/histogram_data.h 00:02:11.367 TEST_HEADER include/spdk/hexlify.h 00:02:11.367 TEST_HEADER include/spdk/idxd_spec.h 00:02:11.368 TEST_HEADER include/spdk/iscsi_spec.h 00:02:11.368 TEST_HEADER include/spdk/ioat.h 00:02:11.368 TEST_HEADER include/spdk/init.h 00:02:11.368 TEST_HEADER include/spdk/ioat_spec.h 00:02:11.368 TEST_HEADER include/spdk/json.h 00:02:11.368 CC app/spdk_dd/spdk_dd.o 00:02:11.368 TEST_HEADER include/spdk/jsonrpc.h 00:02:11.368 CC app/iscsi_tgt/iscsi_tgt.o 00:02:11.368 TEST_HEADER include/spdk/likely.h 00:02:11.368 TEST_HEADER include/spdk/keyring.h 00:02:11.368 TEST_HEADER include/spdk/keyring_module.h 00:02:11.368 TEST_HEADER include/spdk/log.h 00:02:11.368 TEST_HEADER include/spdk/lvol.h 00:02:11.368 TEST_HEADER include/spdk/mmio.h 00:02:11.368 TEST_HEADER include/spdk/memory.h 00:02:11.368 TEST_HEADER include/spdk/nbd.h 00:02:11.368 TEST_HEADER include/spdk/md5.h 00:02:11.368 CC app/spdk_tgt/spdk_tgt.o 00:02:11.368 TEST_HEADER include/spdk/net.h 00:02:11.368 TEST_HEADER include/spdk/nvme_intel.h 00:02:11.368 TEST_HEADER include/spdk/notify.h 00:02:11.368 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:11.368 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:11.368 TEST_HEADER include/spdk/nvme.h 00:02:11.368 TEST_HEADER include/spdk/nvme_spec.h 00:02:11.368 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:11.368 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:11.368 TEST_HEADER include/spdk/nvme_zns.h 00:02:11.368 TEST_HEADER include/spdk/nvmf.h 00:02:11.368 TEST_HEADER include/spdk/nvmf_spec.h 00:02:11.368 TEST_HEADER include/spdk/nvmf_transport.h 00:02:11.368 TEST_HEADER include/spdk/opal.h 00:02:11.368 TEST_HEADER include/spdk/pci_ids.h 00:02:11.368 TEST_HEADER include/spdk/pipe.h 00:02:11.368 TEST_HEADER include/spdk/queue.h 00:02:11.368 TEST_HEADER include/spdk/opal_spec.h 00:02:11.368 TEST_HEADER include/spdk/reduce.h 00:02:11.368 TEST_HEADER include/spdk/rpc.h 00:02:11.368 TEST_HEADER include/spdk/scheduler.h 00:02:11.368 TEST_HEADER include/spdk/scsi.h 00:02:11.368 TEST_HEADER include/spdk/scsi_spec.h 00:02:11.368 TEST_HEADER include/spdk/string.h 00:02:11.368 TEST_HEADER include/spdk/thread.h 00:02:11.368 TEST_HEADER include/spdk/sock.h 00:02:11.368 TEST_HEADER include/spdk/trace.h 00:02:11.368 TEST_HEADER include/spdk/stdinc.h 00:02:11.368 TEST_HEADER include/spdk/trace_parser.h 00:02:11.368 TEST_HEADER include/spdk/tree.h 00:02:11.368 TEST_HEADER include/spdk/util.h 00:02:11.368 TEST_HEADER include/spdk/ublk.h 00:02:11.368 TEST_HEADER include/spdk/uuid.h 00:02:11.368 TEST_HEADER include/spdk/version.h 00:02:11.368 TEST_HEADER include/spdk/vmd.h 00:02:11.368 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:11.368 TEST_HEADER include/spdk/vhost.h 00:02:11.368 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:02:11.368 TEST_HEADER include/spdk/xor.h 00:02:11.368 TEST_HEADER include/spdk/zipf.h 00:02:11.368 CXX test/cpp_headers/accel.o 00:02:11.368 CXX test/cpp_headers/accel_module.o 00:02:11.368 CXX test/cpp_headers/assert.o 00:02:11.368 CXX test/cpp_headers/barrier.o 00:02:11.368 CXX test/cpp_headers/base64.o 00:02:11.368 CXX test/cpp_headers/bdev.o 00:02:11.368 CXX test/cpp_headers/bit_array.o 00:02:11.368 CXX test/cpp_headers/bdev_module.o 00:02:11.368 CXX test/cpp_headers/bit_pool.o 00:02:11.368 CXX test/cpp_headers/bdev_zone.o 00:02:11.368 CXX test/cpp_headers/blob_bdev.o 00:02:11.368 CXX test/cpp_headers/blob.o 00:02:11.368 CXX test/cpp_headers/blobfs_bdev.o 00:02:11.368 CXX test/cpp_headers/blobfs.o 00:02:11.368 CXX test/cpp_headers/conf.o 00:02:11.368 CXX test/cpp_headers/config.o 00:02:11.368 CXX test/cpp_headers/crc32.o 00:02:11.368 CXX test/cpp_headers/crc16.o 00:02:11.368 CXX test/cpp_headers/cpuset.o 00:02:11.368 CXX test/cpp_headers/crc64.o 00:02:11.368 CXX test/cpp_headers/dif.o 00:02:11.368 CXX test/cpp_headers/dma.o 00:02:11.368 CXX test/cpp_headers/endian.o 00:02:11.368 CXX test/cpp_headers/env_dpdk.o 00:02:11.368 CXX test/cpp_headers/env.o 00:02:11.368 CXX test/cpp_headers/event.o 00:02:11.368 CXX test/cpp_headers/fd_group.o 00:02:11.368 CXX test/cpp_headers/fd.o 00:02:11.368 CXX test/cpp_headers/file.o 00:02:11.368 CXX test/cpp_headers/fsdev.o 00:02:11.368 CXX test/cpp_headers/fsdev_module.o 00:02:11.368 CXX test/cpp_headers/ftl.o 00:02:11.368 CXX test/cpp_headers/fuse_dispatcher.o 00:02:11.368 CXX test/cpp_headers/hexlify.o 00:02:11.368 CXX test/cpp_headers/gpt_spec.o 00:02:11.650 CXX test/cpp_headers/idxd.o 00:02:11.650 CXX test/cpp_headers/histogram_data.o 00:02:11.650 CXX test/cpp_headers/idxd_spec.o 00:02:11.650 CXX test/cpp_headers/init.o 00:02:11.650 CXX test/cpp_headers/iscsi_spec.o 00:02:11.650 CXX test/cpp_headers/ioat.o 00:02:11.650 CC test/env/memory/memory_ut.o 00:02:11.650 CXX test/cpp_headers/json.o 00:02:11.650 CXX test/cpp_headers/ioat_spec.o 00:02:11.650 CXX test/cpp_headers/keyring.o 00:02:11.650 CXX test/cpp_headers/jsonrpc.o 00:02:11.650 CXX test/cpp_headers/keyring_module.o 00:02:11.650 CXX test/cpp_headers/likely.o 00:02:11.650 CXX test/cpp_headers/log.o 00:02:11.650 CXX test/cpp_headers/md5.o 00:02:11.650 CC test/env/pci/pci_ut.o 00:02:11.650 CXX test/cpp_headers/lvol.o 00:02:11.650 CXX test/cpp_headers/memory.o 00:02:11.650 CXX test/cpp_headers/mmio.o 00:02:11.650 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:11.650 CXX test/cpp_headers/nbd.o 00:02:11.650 CXX test/cpp_headers/notify.o 00:02:11.650 CXX test/cpp_headers/net.o 00:02:11.650 CXX test/cpp_headers/nvme.o 00:02:11.650 CXX test/cpp_headers/nvme_intel.o 00:02:11.650 CXX test/cpp_headers/nvme_ocssd.o 00:02:11.650 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:11.650 CXX test/cpp_headers/nvme_spec.o 00:02:11.650 CC test/thread/poller_perf/poller_perf.o 00:02:11.650 CXX test/cpp_headers/nvmf_cmd.o 00:02:11.650 CXX test/cpp_headers/nvme_zns.o 00:02:11.650 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:11.650 CC examples/util/zipf/zipf.o 00:02:11.650 CXX test/cpp_headers/nvmf.o 00:02:11.650 CXX test/cpp_headers/nvmf_spec.o 00:02:11.650 CXX test/cpp_headers/nvmf_transport.o 00:02:11.650 CXX test/cpp_headers/opal.o 00:02:11.650 CXX test/cpp_headers/opal_spec.o 00:02:11.650 CXX test/cpp_headers/pci_ids.o 00:02:11.650 CXX test/cpp_headers/pipe.o 00:02:11.650 CXX test/cpp_headers/queue.o 00:02:11.650 CXX test/cpp_headers/reduce.o 00:02:11.650 CXX 
test/cpp_headers/rpc.o 00:02:11.650 CC test/env/vtophys/vtophys.o 00:02:11.650 CXX test/cpp_headers/scheduler.o 00:02:11.650 CXX test/cpp_headers/scsi.o 00:02:11.650 CXX test/cpp_headers/scsi_spec.o 00:02:11.650 CXX test/cpp_headers/sock.o 00:02:11.650 CXX test/cpp_headers/stdinc.o 00:02:11.650 CXX test/cpp_headers/thread.o 00:02:11.650 CXX test/cpp_headers/string.o 00:02:11.650 CC examples/ioat/perf/perf.o 00:02:11.650 CXX test/cpp_headers/trace_parser.o 00:02:11.650 CXX test/cpp_headers/trace.o 00:02:11.650 CC test/app/stub/stub.o 00:02:11.650 CXX test/cpp_headers/tree.o 00:02:11.650 CC test/app/jsoncat/jsoncat.o 00:02:11.650 CC app/fio/nvme/fio_plugin.o 00:02:11.650 CC test/app/histogram_perf/histogram_perf.o 00:02:11.650 CC examples/ioat/verify/verify.o 00:02:11.650 CC test/app/bdev_svc/bdev_svc.o 00:02:11.650 CC app/fio/bdev/fio_plugin.o 00:02:11.650 CC test/dma/test_dma/test_dma.o 00:02:11.650 LINK spdk_lspci 00:02:11.650 CXX test/cpp_headers/ublk.o 00:02:11.920 LINK nvmf_tgt 00:02:11.920 LINK spdk_nvme_discover 00:02:11.920 LINK interrupt_tgt 00:02:11.920 LINK spdk_trace_record 00:02:12.183 LINK rpc_client_test 00:02:12.183 LINK poller_perf 00:02:12.183 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.183 LINK jsoncat 00:02:12.183 LINK env_dpdk_post_init 00:02:12.183 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:12.183 LINK histogram_perf 00:02:12.183 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.183 CXX test/cpp_headers/util.o 00:02:12.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:12.183 CXX test/cpp_headers/version.o 00:02:12.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:12.183 CXX test/cpp_headers/uuid.o 00:02:12.183 CXX test/cpp_headers/vfio_user_pci.o 00:02:12.183 CXX test/cpp_headers/vfio_user_spec.o 00:02:12.183 CXX test/cpp_headers/vhost.o 00:02:12.183 CXX test/cpp_headers/xor.o 00:02:12.183 LINK spdk_tgt 00:02:12.183 CXX test/cpp_headers/vmd.o 00:02:12.183 CXX test/cpp_headers/zipf.o 00:02:12.183 LINK bdev_svc 00:02:12.183 LINK iscsi_tgt 00:02:12.184 LINK vtophys 00:02:12.184 LINK zipf 00:02:12.442 LINK stub 00:02:12.442 LINK verify 00:02:12.442 LINK ioat_perf 00:02:12.442 LINK spdk_trace 00:02:12.442 LINK spdk_dd 00:02:12.442 LINK pci_ut 00:02:12.442 LINK spdk_bdev 00:02:12.442 LINK test_dma 00:02:12.442 LINK nvme_fuzz 00:02:12.701 LINK spdk_nvme 00:02:12.701 LINK vhost_fuzz 00:02:12.701 LINK spdk_nvme_identify 00:02:12.701 CC test/event/reactor_perf/reactor_perf.o 00:02:12.701 CC test/event/reactor/reactor.o 00:02:12.701 CC app/vhost/vhost.o 00:02:12.701 CC test/event/event_perf/event_perf.o 00:02:12.701 CC test/event/app_repeat/app_repeat.o 00:02:12.701 CC test/event/scheduler/scheduler.o 00:02:12.701 CC examples/idxd/perf/perf.o 00:02:12.701 CC examples/vmd/lsvmd/lsvmd.o 00:02:12.701 CC examples/sock/hello_world/hello_sock.o 00:02:12.701 CC examples/vmd/led/led.o 00:02:12.701 LINK mem_callbacks 00:02:12.701 CC examples/thread/thread/thread_ex.o 00:02:12.701 LINK spdk_top 00:02:12.701 LINK spdk_nvme_perf 00:02:12.701 LINK reactor_perf 00:02:12.701 LINK reactor 00:02:12.701 LINK event_perf 00:02:12.961 LINK app_repeat 00:02:12.961 LINK lsvmd 00:02:12.961 LINK vhost 00:02:12.961 LINK led 00:02:12.961 LINK scheduler 00:02:12.961 CC test/nvme/err_injection/err_injection.o 00:02:12.961 CC test/nvme/fused_ordering/fused_ordering.o 00:02:12.961 CC test/nvme/boot_partition/boot_partition.o 00:02:12.961 CC test/nvme/startup/startup.o 00:02:12.961 CC test/nvme/reset/reset.o 00:02:12.961 CC test/nvme/e2edp/nvme_dp.o 00:02:12.961 LINK hello_sock 00:02:12.961 CC 
test/nvme/compliance/nvme_compliance.o 00:02:12.961 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:12.961 CC test/nvme/aer/aer.o 00:02:12.961 CC test/nvme/simple_copy/simple_copy.o 00:02:12.961 CC test/nvme/sgl/sgl.o 00:02:12.961 CC test/nvme/connect_stress/connect_stress.o 00:02:12.961 CC test/nvme/cuse/cuse.o 00:02:12.961 CC test/nvme/overhead/overhead.o 00:02:12.961 CC test/nvme/reserve/reserve.o 00:02:12.961 CC test/nvme/fdp/fdp.o 00:02:12.961 LINK memory_ut 00:02:12.961 CC test/blobfs/mkfs/mkfs.o 00:02:12.961 LINK idxd_perf 00:02:12.961 CC test/accel/dif/dif.o 00:02:12.961 LINK thread 00:02:13.220 CC test/lvol/esnap/esnap.o 00:02:13.220 LINK boot_partition 00:02:13.220 LINK fused_ordering 00:02:13.220 LINK connect_stress 00:02:13.220 LINK err_injection 00:02:13.220 LINK doorbell_aers 00:02:13.220 LINK startup 00:02:13.220 LINK reserve 00:02:13.220 LINK simple_copy 00:02:13.220 LINK nvme_dp 00:02:13.221 LINK reset 00:02:13.221 LINK mkfs 00:02:13.221 LINK nvme_compliance 00:02:13.221 LINK sgl 00:02:13.221 LINK overhead 00:02:13.221 LINK aer 00:02:13.221 LINK fdp 00:02:13.221 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.221 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.221 CC examples/nvme/abort/abort.o 00:02:13.221 CC examples/nvme/hotplug/hotplug.o 00:02:13.221 CC examples/nvme/arbitration/arbitration.o 00:02:13.478 CC examples/nvme/hello_world/hello_world.o 00:02:13.478 CC examples/nvme/reconnect/reconnect.o 00:02:13.478 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.478 LINK iscsi_fuzz 00:02:13.478 CC examples/accel/perf/accel_perf.o 00:02:13.478 LINK dif 00:02:13.478 CC examples/blob/cli/blobcli.o 00:02:13.478 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:13.478 LINK hello_world 00:02:13.478 LINK cmb_copy 00:02:13.478 LINK hotplug 00:02:13.478 CC examples/blob/hello_world/hello_blob.o 00:02:13.478 LINK pmr_persistence 00:02:13.478 LINK arbitration 00:02:13.478 LINK abort 00:02:13.735 LINK reconnect 00:02:13.735 LINK nvme_manage 00:02:13.735 LINK hello_blob 00:02:13.735 LINK hello_fsdev 00:02:13.735 LINK accel_perf 00:02:13.992 LINK blobcli 00:02:13.992 LINK cuse 00:02:13.992 CC test/bdev/bdevio/bdevio.o 00:02:14.250 CC examples/bdev/bdevperf/bdevperf.o 00:02:14.250 CC examples/bdev/hello_world/hello_bdev.o 00:02:14.250 LINK bdevio 00:02:14.508 LINK hello_bdev 00:02:14.766 LINK bdevperf 00:02:15.332 CC examples/nvmf/nvmf/nvmf.o 00:02:15.590 LINK nvmf 00:02:16.524 LINK esnap 00:02:16.524 00:02:16.524 real 0m50.549s 00:02:16.524 user 7m3.662s 00:02:16.524 sys 3m35.645s 00:02:16.524 16:15:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.524 16:15:11 make -- common/autotest_common.sh@10 -- $ set +x 00:02:16.524 ************************************ 00:02:16.524 END TEST make 00:02:16.524 ************************************ 00:02:16.524 16:15:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:16.524 16:15:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:16.524 16:15:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:16.524 16:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.524 16:15:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:16.524 16:15:11 -- pm/common@44 -- $ pid=3509846 00:02:16.524 16:15:11 -- pm/common@50 -- $ kill -TERM 3509846 00:02:16.524 16:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.524 16:15:11 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:16.524 16:15:11 -- pm/common@44 -- $ pid=3509847 00:02:16.524 16:15:11 -- pm/common@50 -- $ kill -TERM 3509847 00:02:16.524 16:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.524 16:15:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:16.524 16:15:11 -- pm/common@44 -- $ pid=3509849 00:02:16.524 16:15:11 -- pm/common@50 -- $ kill -TERM 3509849 00:02:16.524 16:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.524 16:15:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:16.524 16:15:11 -- pm/common@44 -- $ pid=3509872 00:02:16.524 16:15:11 -- pm/common@50 -- $ sudo -E kill -TERM 3509872 00:02:16.783 16:15:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:16.783 16:15:11 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:16.783 16:15:11 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:16.783 16:15:11 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:16.783 16:15:11 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:16.783 16:15:11 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:16.783 16:15:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:16.783 16:15:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:16.783 16:15:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:16.783 16:15:11 -- scripts/common.sh@336 -- # IFS=.-: 00:02:16.783 16:15:11 -- scripts/common.sh@336 -- # read -ra ver1 00:02:16.783 16:15:11 -- scripts/common.sh@337 -- # IFS=.-: 00:02:16.783 16:15:11 -- scripts/common.sh@337 -- # read -ra ver2 00:02:16.783 16:15:11 -- scripts/common.sh@338 -- # local 'op=<' 00:02:16.783 16:15:11 -- scripts/common.sh@340 -- # ver1_l=2 00:02:16.783 16:15:11 -- scripts/common.sh@341 -- # ver2_l=1 00:02:16.783 16:15:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:16.783 16:15:11 -- scripts/common.sh@344 -- # case "$op" in 00:02:16.783 16:15:11 -- scripts/common.sh@345 -- # : 1 00:02:16.783 16:15:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:16.783 16:15:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.783 16:15:11 -- scripts/common.sh@365 -- # decimal 1 00:02:16.783 16:15:11 -- scripts/common.sh@353 -- # local d=1 00:02:16.783 16:15:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:16.783 16:15:11 -- scripts/common.sh@355 -- # echo 1 00:02:16.783 16:15:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:16.783 16:15:11 -- scripts/common.sh@366 -- # decimal 2 00:02:16.783 16:15:11 -- scripts/common.sh@353 -- # local d=2 00:02:16.783 16:15:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:16.783 16:15:11 -- scripts/common.sh@355 -- # echo 2 00:02:16.783 16:15:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:16.783 16:15:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:16.783 16:15:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:16.783 16:15:11 -- scripts/common.sh@368 -- # return 0 00:02:16.783 16:15:11 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:16.783 16:15:11 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.783 --rc genhtml_branch_coverage=1 00:02:16.783 --rc genhtml_function_coverage=1 00:02:16.783 --rc genhtml_legend=1 00:02:16.783 --rc geninfo_all_blocks=1 00:02:16.783 --rc geninfo_unexecuted_blocks=1 00:02:16.783 00:02:16.783 ' 00:02:16.783 16:15:11 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.783 --rc genhtml_branch_coverage=1 00:02:16.783 --rc genhtml_function_coverage=1 00:02:16.783 --rc genhtml_legend=1 00:02:16.783 --rc geninfo_all_blocks=1 00:02:16.783 --rc geninfo_unexecuted_blocks=1 00:02:16.783 00:02:16.783 ' 00:02:16.783 16:15:11 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.783 --rc genhtml_branch_coverage=1 00:02:16.783 --rc genhtml_function_coverage=1 00:02:16.783 --rc genhtml_legend=1 00:02:16.783 --rc geninfo_all_blocks=1 00:02:16.783 --rc geninfo_unexecuted_blocks=1 00:02:16.783 00:02:16.783 ' 00:02:16.783 16:15:11 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:16.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:16.783 --rc genhtml_branch_coverage=1 00:02:16.783 --rc genhtml_function_coverage=1 00:02:16.783 --rc genhtml_legend=1 00:02:16.783 --rc geninfo_all_blocks=1 00:02:16.783 --rc geninfo_unexecuted_blocks=1 00:02:16.783 00:02:16.783 ' 00:02:16.783 16:15:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:16.783 16:15:11 -- nvmf/common.sh@7 -- # uname -s 00:02:16.783 16:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:16.783 16:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:16.783 16:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:16.783 16:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:16.783 16:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:16.783 16:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:16.783 16:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:16.783 16:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:16.783 16:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:16.783 16:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:16.783 16:15:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:02:16.783 16:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:02:16.783 16:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:16.783 16:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:16.783 16:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:16.783 16:15:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:16.783 16:15:11 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:16.783 16:15:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:16.783 16:15:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:16.783 16:15:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:16.783 16:15:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:16.783 16:15:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.783 16:15:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.783 16:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.783 16:15:11 -- paths/export.sh@5 -- # export PATH 00:02:16.783 16:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:16.783 16:15:11 -- nvmf/common.sh@51 -- # : 0 00:02:16.783 16:15:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:16.783 16:15:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:16.783 16:15:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:16.783 16:15:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:16.783 16:15:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:16.783 16:15:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:16.783 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:16.783 16:15:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:16.783 16:15:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:16.783 16:15:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:16.783 16:15:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:16.784 16:15:11 -- spdk/autotest.sh@32 -- # uname -s 00:02:16.784 16:15:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:16.784 16:15:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:16.784 16:15:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:16.784 
16:15:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:16.784 16:15:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:16.784 16:15:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:16.784 16:15:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:16.784 16:15:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:16.784 16:15:11 -- spdk/autotest.sh@48 -- # udevadm_pid=3572576 00:02:16.784 16:15:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:16.784 16:15:11 -- pm/common@17 -- # local monitor 00:02:16.784 16:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.784 16:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.784 16:15:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:16.784 16:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.784 16:15:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:16.784 16:15:11 -- pm/common@25 -- # sleep 1 00:02:16.784 16:15:11 -- pm/common@21 -- # date +%s 00:02:16.784 16:15:11 -- pm/common@21 -- # date +%s 00:02:16.784 16:15:11 -- pm/common@21 -- # date +%s 00:02:16.784 16:15:11 -- pm/common@21 -- # date +%s 00:02:16.784 16:15:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733498111 00:02:16.784 16:15:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733498111 00:02:16.784 16:15:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733498111 00:02:16.784 16:15:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733498111 00:02:16.784 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733498111_collect-cpu-load.pm.log 00:02:16.784 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733498111_collect-vmstat.pm.log 00:02:16.784 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733498111_collect-cpu-temp.pm.log 00:02:17.042 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733498111_collect-bmc-pm.bmc.pm.log 00:02:17.980 16:15:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:17.980 16:15:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:17.980 16:15:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:17.980 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:02:17.980 16:15:12 -- spdk/autotest.sh@59 -- # create_test_list 00:02:17.980 16:15:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:17.980 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:02:17.980 16:15:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:17.980 16:15:12 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:17.980 16:15:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:17.980 16:15:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:17.980 16:15:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:17.980 16:15:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:17.980 16:15:12 -- common/autotest_common.sh@1457 -- # uname 00:02:17.980 16:15:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:17.980 16:15:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:17.980 16:15:12 -- common/autotest_common.sh@1477 -- # uname 00:02:17.980 16:15:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:17.980 16:15:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:17.980 16:15:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:17.980 lcov: LCOV version 1.15 00:02:17.980 16:15:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:36.177 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:36.177 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:41.452 16:15:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:41.452 16:15:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:41.452 16:15:35 -- common/autotest_common.sh@10 -- # set +x 00:02:41.452 16:15:35 -- spdk/autotest.sh@78 -- # rm -f 00:02:41.452 16:15:35 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.357 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:43.357 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:44.733 16:15:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:44.733 16:15:39 -- 
common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:44.733 16:15:39 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:44.733 16:15:39 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:44.733 16:15:39 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:44.733 16:15:39 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:44.733 16:15:39 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:44.733 16:15:39 -- common/autotest_common.sh@1669 -- # bdf=0000:d8:00.0 00:02:44.733 16:15:39 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:44.733 16:15:39 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:44.733 16:15:39 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:44.733 16:15:39 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.733 16:15:39 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:44.733 16:15:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:44.733 16:15:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:44.733 16:15:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:44.733 16:15:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:44.733 16:15:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:44.733 16:15:39 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.733 No valid GPT data, bailing 00:02:44.733 16:15:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.733 16:15:39 -- scripts/common.sh@394 -- # pt= 00:02:44.733 16:15:39 -- scripts/common.sh@395 -- # return 1 00:02:44.733 16:15:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.733 1+0 records in 00:02:44.733 1+0 records out 00:02:44.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613237 s, 171 MB/s 00:02:44.733 16:15:39 -- spdk/autotest.sh@105 -- # sync 00:02:44.733 16:15:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.733 16:15:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.734 16:15:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.009 16:15:44 -- spdk/autotest.sh@111 -- # uname -s 00:02:50.009 16:15:44 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:50.009 16:15:44 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:50.009 16:15:44 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:52.543 Hugepages 00:02:52.543 node hugesize free / total 00:02:52.543 node0 1048576kB 0 / 0 00:02:52.543 node0 2048kB 0 / 0 00:02:52.543 node1 1048576kB 0 / 0 00:02:52.543 node1 2048kB 0 / 0 00:02:52.543 00:02:52.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:52.543 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:52.543 
I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:52.543 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:52.543 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:52.543 16:15:46 -- spdk/autotest.sh@117 -- # uname -s 00:02:52.543 16:15:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:52.543 16:15:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:52.543 16:15:46 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:55.076 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:55.076 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:58.364 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:02:59.761 16:15:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:00.699 16:15:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:00.700 16:15:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:00.700 16:15:55 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:00.700 16:15:55 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:00.700 16:15:55 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:00.700 16:15:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:00.700 16:15:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:00.700 16:15:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:00.700 16:15:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:00.973 16:15:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:00.973 16:15:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:00.973 16:15:55 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.875 Waiting for block devices as requested 00:03:03.134 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:03.134 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:03.134 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:03.134 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:03.394 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:03.394 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:03.394 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:03.394 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:03.653 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:03.653 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:03.653 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:03.653 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:03.912 0000:80:04.3 (8086 
2021): vfio-pci -> ioatdma 00:03:03.912 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:03.912 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:04.172 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:04.172 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:03:05.552 16:16:00 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:05.552 16:16:00 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:03:05.552 16:16:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:03:05.552 16:16:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:05.552 16:16:00 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:05.552 16:16:00 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:05.552 16:16:00 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:05.552 16:16:00 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:05.552 16:16:00 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:05.552 16:16:00 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:05.552 16:16:00 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:05.552 16:16:00 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:05.552 16:16:00 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:05.552 16:16:00 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:05.552 16:16:00 -- common/autotest_common.sh@1543 -- # continue 00:03:05.552 16:16:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:05.552 16:16:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:05.552 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:03:05.811 16:16:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:05.811 16:16:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:05.811 16:16:00 -- common/autotest_common.sh@10 -- # set +x 00:03:05.811 16:16:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:08.347 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 
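[editor's note] The opal/cleanup check traced above pulls the OACS (Optional Admin Command Support) field out of nvme id-ctrl and tests one bit of it; masking ' 0xe' with 0x8 is how the trace arrives at oacs_ns_manage=8. A rough reconstruction of that probe, assuming the mask is bit 3 as the logged values suggest:

  # extract the oacs field from the controller identify data (nvme-cli)
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)   # e.g. ' 0xe'

  # test bit 3 (0x8): 0xe & 0x8 == 8, so it is set on this controller
  oacs_ns_manage=$(( oacs & 0x8 ))
  if [ "$oacs_ns_manage" -ne 0 ]; then
      echo "controller reports the namespace-management OACS bit"
  fi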
00:03:08.347 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.347 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.638 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:13.011 16:16:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:13.011 16:16:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:13.011 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:03:13.011 16:16:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:13.011 16:16:07 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:13.011 16:16:07 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:13.011 16:16:07 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:13.011 16:16:07 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:13.011 16:16:07 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:13.011 16:16:07 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:13.011 16:16:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:13.011 16:16:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:13.011 16:16:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:13.011 16:16:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.011 16:16:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:13.011 16:16:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:13.011 16:16:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:13.012 16:16:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:13.012 16:16:07 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:13.012 16:16:07 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:03:13.012 16:16:07 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:13.012 16:16:07 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:13.012 16:16:07 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:13.012 16:16:07 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:13.012 16:16:07 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:03:13.012 16:16:07 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:03:13.012 16:16:07 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3588168 00:03:13.012 16:16:07 -- common/autotest_common.sh@1585 -- # waitforlisten 3588168 00:03:13.012 16:16:07 -- common/autotest_common.sh@835 -- # '[' -z 3588168 ']' 00:03:13.012 16:16:07 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:13.012 16:16:07 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:13.012 16:16:07 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:13.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:13.012 16:16:07 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:13.012 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:03:13.012 16:16:07 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:13.012 [2024-12-06 16:16:07.691976] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
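[editor's note] The bdf filtering just above works by reading a candidate device's PCI device ID out of sysfs (cat /sys/bus/pci/devices/0000:d8:00.0/device) and string-comparing it against the wanted ID, 0x0a54, taken from the log. A standalone sketch of the same idea swept over all PCI functions; this is the generic sysfs mechanism, not necessarily the script's exact code:

  want=0x0a54
  for dev in /sys/bus/pci/devices/*; do
      # each device directory exposes its 16-bit device ID as e.g. "0x0a54"
      if [ "$(cat "$dev/device")" == "$want" ]; then
          echo "match: ${dev##*/}"
      fi
  done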
00:03:13.012 [2024-12-06 16:16:07.692022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588168 ] 00:03:13.270 [2024-12-06 16:16:07.748479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:13.270 [2024-12-06 16:16:07.787309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:13.270 16:16:07 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:13.270 16:16:07 -- common/autotest_common.sh@868 -- # return 0 00:03:13.270 16:16:07 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:13.270 16:16:07 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:13.270 16:16:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:03:16.553 nvme0n1 00:03:16.553 16:16:10 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:16.553 [2024-12-06 16:16:11.142215] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:16.553 request: 00:03:16.553 { 00:03:16.553 "nvme_ctrlr_name": "nvme0", 00:03:16.553 "password": "test", 00:03:16.553 "method": "bdev_nvme_opal_revert", 00:03:16.553 "req_id": 1 00:03:16.553 } 00:03:16.553 Got JSON-RPC error response 00:03:16.553 response: 00:03:16.553 { 00:03:16.553 "code": -32602, 00:03:16.553 "message": "Invalid parameters" 00:03:16.553 } 00:03:16.553 16:16:11 -- common/autotest_common.sh@1591 -- # true 00:03:16.553 16:16:11 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:16.553 16:16:11 -- common/autotest_common.sh@1595 -- # killprocess 3588168 00:03:16.553 16:16:11 -- common/autotest_common.sh@954 -- # '[' -z 3588168 ']' 00:03:16.553 16:16:11 -- common/autotest_common.sh@958 -- # kill -0 3588168 00:03:16.553 16:16:11 -- common/autotest_common.sh@959 -- # uname 00:03:16.553 16:16:11 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:16.553 16:16:11 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3588168 00:03:16.553 16:16:11 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:16.553 16:16:11 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:16.553 16:16:11 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3588168' 00:03:16.553 killing process with pid 3588168 00:03:16.553 16:16:11 -- common/autotest_common.sh@973 -- # kill 3588168 00:03:16.553 16:16:11 -- common/autotest_common.sh@978 -- # wait 3588168 00:03:20.744 16:16:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:20.744 16:16:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:20.744 16:16:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.744 16:16:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:20.744 16:16:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:20.744 16:16:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:20.744 16:16:15 -- common/autotest_common.sh@10 -- # set +x 00:03:20.744 16:16:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:20.744 16:16:15 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:20.744 16:16:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.744 16:16:15 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:03:20.744 16:16:15 -- common/autotest_common.sh@10 -- # set +x 00:03:20.744 ************************************ 00:03:20.744 START TEST env 00:03:20.744 ************************************ 00:03:20.744 16:16:15 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:20.744 * Looking for test storage... 00:03:20.744 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:20.744 16:16:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:20.744 16:16:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:20.744 16:16:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:20.744 16:16:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:20.744 16:16:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.744 16:16:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.744 16:16:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.744 16:16:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.744 16:16:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.744 16:16:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.744 16:16:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.744 16:16:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.744 16:16:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.744 16:16:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.744 16:16:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.744 16:16:15 env -- scripts/common.sh@344 -- # case "$op" in 00:03:20.744 16:16:15 env -- scripts/common.sh@345 -- # : 1 00:03:20.744 16:16:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.745 16:16:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:20.745 16:16:15 env -- scripts/common.sh@365 -- # decimal 1 00:03:20.745 16:16:15 env -- scripts/common.sh@353 -- # local d=1 00:03:20.745 16:16:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.745 16:16:15 env -- scripts/common.sh@355 -- # echo 1 00:03:20.745 16:16:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.745 16:16:15 env -- scripts/common.sh@366 -- # decimal 2 00:03:20.745 16:16:15 env -- scripts/common.sh@353 -- # local d=2 00:03:20.745 16:16:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.745 16:16:15 env -- scripts/common.sh@355 -- # echo 2 00:03:20.745 16:16:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.745 16:16:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.745 16:16:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.745 16:16:15 env -- scripts/common.sh@368 -- # return 0 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.745 --rc genhtml_branch_coverage=1 00:03:20.745 --rc genhtml_function_coverage=1 00:03:20.745 --rc genhtml_legend=1 00:03:20.745 --rc geninfo_all_blocks=1 00:03:20.745 --rc geninfo_unexecuted_blocks=1 00:03:20.745 00:03:20.745 ' 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.745 --rc genhtml_branch_coverage=1 00:03:20.745 --rc genhtml_function_coverage=1 00:03:20.745 --rc genhtml_legend=1 00:03:20.745 --rc geninfo_all_blocks=1 00:03:20.745 --rc geninfo_unexecuted_blocks=1 00:03:20.745 00:03:20.745 ' 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.745 --rc genhtml_branch_coverage=1 00:03:20.745 --rc genhtml_function_coverage=1 00:03:20.745 --rc genhtml_legend=1 00:03:20.745 --rc geninfo_all_blocks=1 00:03:20.745 --rc geninfo_unexecuted_blocks=1 00:03:20.745 00:03:20.745 ' 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.745 --rc genhtml_branch_coverage=1 00:03:20.745 --rc genhtml_function_coverage=1 00:03:20.745 --rc genhtml_legend=1 00:03:20.745 --rc geninfo_all_blocks=1 00:03:20.745 --rc geninfo_unexecuted_blocks=1 00:03:20.745 00:03:20.745 ' 00:03:20.745 16:16:15 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.745 16:16:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.745 16:16:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.745 ************************************ 00:03:20.745 START TEST env_memory 00:03:20.745 ************************************ 00:03:20.745 16:16:15 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:20.745 00:03:20.745 00:03:20.745 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.745 http://cunit.sourceforge.net/ 00:03:20.745 00:03:20.745 00:03:20.745 Suite: memory 00:03:20.745 Test: alloc and free memory map ...[2024-12-06 16:16:15.407504] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:20.745 passed 00:03:20.745 Test: mem map translation ...[2024-12-06 16:16:15.424403] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:20.745 [2024-12-06 16:16:15.424416] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:20.745 [2024-12-06 16:16:15.424447] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:20.745 [2024-12-06 16:16:15.424453] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:20.745 passed 00:03:20.745 Test: mem map registration ...[2024-12-06 16:16:15.457595] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:20.745 [2024-12-06 16:16:15.457614] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:20.745 passed 00:03:21.004 Test: mem map adjacent registrations ...passed 00:03:21.004 00:03:21.004 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.004 suites 1 1 n/a 0 0 00:03:21.004 tests 4 4 4 0 0 00:03:21.004 asserts 152 152 152 0 n/a 00:03:21.004 00:03:21.004 Elapsed time = 0.125 seconds 00:03:21.004 00:03:21.004 real 0m0.137s 00:03:21.004 user 0m0.129s 00:03:21.004 sys 0m0.007s 00:03:21.004 16:16:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.004 16:16:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:21.004 ************************************ 00:03:21.004 END TEST env_memory 00:03:21.004 ************************************ 00:03:21.004 16:16:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.004 16:16:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.004 16:16:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.004 16:16:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:21.004 ************************************ 00:03:21.004 START TEST env_vtophys 00:03:21.004 ************************************ 00:03:21.004 16:16:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.004 EAL: lib.eal log level changed from notice to debug 00:03:21.004 EAL: Detected lcore 0 as core 0 on socket 0 00:03:21.004 EAL: Detected lcore 1 as core 1 on socket 0 00:03:21.004 EAL: Detected lcore 2 as core 2 on socket 0 00:03:21.004 EAL: Detected lcore 3 as core 3 on socket 0 00:03:21.004 EAL: Detected lcore 4 as core 4 on socket 0 00:03:21.004 EAL: Detected lcore 5 as core 5 on socket 0 00:03:21.004 EAL: Detected lcore 6 as core 6 on socket 0 00:03:21.004 EAL: Detected lcore 7 as core 8 on socket 0 00:03:21.004 EAL: Detected lcore 8 as core 9 on socket 0 00:03:21.004 EAL: Detected lcore 9 as core 10 on socket 0 00:03:21.004 EAL: Detected lcore 10 as core 11 on socket 0 00:03:21.004 
EAL: Detected lcore 11 as core 12 on socket 0 00:03:21.004 EAL: Detected lcore 12 as core 13 on socket 0 00:03:21.004 EAL: Detected lcore 13 as core 14 on socket 0 00:03:21.004 EAL: Detected lcore 14 as core 16 on socket 0 00:03:21.004 EAL: Detected lcore 15 as core 17 on socket 0 00:03:21.004 EAL: Detected lcore 16 as core 18 on socket 0 00:03:21.004 EAL: Detected lcore 17 as core 19 on socket 0 00:03:21.004 EAL: Detected lcore 18 as core 20 on socket 0 00:03:21.004 EAL: Detected lcore 19 as core 21 on socket 0 00:03:21.004 EAL: Detected lcore 20 as core 22 on socket 0 00:03:21.004 EAL: Detected lcore 21 as core 24 on socket 0 00:03:21.004 EAL: Detected lcore 22 as core 25 on socket 0 00:03:21.004 EAL: Detected lcore 23 as core 26 on socket 0 00:03:21.004 EAL: Detected lcore 24 as core 27 on socket 0 00:03:21.004 EAL: Detected lcore 25 as core 28 on socket 0 00:03:21.004 EAL: Detected lcore 26 as core 29 on socket 0 00:03:21.004 EAL: Detected lcore 27 as core 30 on socket 0 00:03:21.004 EAL: Detected lcore 28 as core 0 on socket 1 00:03:21.004 EAL: Detected lcore 29 as core 1 on socket 1 00:03:21.004 EAL: Detected lcore 30 as core 2 on socket 1 00:03:21.004 EAL: Detected lcore 31 as core 3 on socket 1 00:03:21.004 EAL: Detected lcore 32 as core 4 on socket 1 00:03:21.004 EAL: Detected lcore 33 as core 5 on socket 1 00:03:21.004 EAL: Detected lcore 34 as core 6 on socket 1 00:03:21.004 EAL: Detected lcore 35 as core 8 on socket 1 00:03:21.004 EAL: Detected lcore 36 as core 9 on socket 1 00:03:21.005 EAL: Detected lcore 37 as core 10 on socket 1 00:03:21.005 EAL: Detected lcore 38 as core 11 on socket 1 00:03:21.005 EAL: Detected lcore 39 as core 12 on socket 1 00:03:21.005 EAL: Detected lcore 40 as core 13 on socket 1 00:03:21.005 EAL: Detected lcore 41 as core 14 on socket 1 00:03:21.005 EAL: Detected lcore 42 as core 16 on socket 1 00:03:21.005 EAL: Detected lcore 43 as core 17 on socket 1 00:03:21.005 EAL: Detected lcore 44 as core 18 on socket 1 00:03:21.005 EAL: Detected lcore 45 as core 19 on socket 1 00:03:21.005 EAL: Detected lcore 46 as core 20 on socket 1 00:03:21.005 EAL: Detected lcore 47 as core 21 on socket 1 00:03:21.005 EAL: Detected lcore 48 as core 22 on socket 1 00:03:21.005 EAL: Detected lcore 49 as core 24 on socket 1 00:03:21.005 EAL: Detected lcore 50 as core 25 on socket 1 00:03:21.005 EAL: Detected lcore 51 as core 26 on socket 1 00:03:21.005 EAL: Detected lcore 52 as core 27 on socket 1 00:03:21.005 EAL: Detected lcore 53 as core 28 on socket 1 00:03:21.005 EAL: Detected lcore 54 as core 29 on socket 1 00:03:21.005 EAL: Detected lcore 55 as core 30 on socket 1 00:03:21.005 EAL: Detected lcore 56 as core 0 on socket 0 00:03:21.005 EAL: Detected lcore 57 as core 1 on socket 0 00:03:21.005 EAL: Detected lcore 58 as core 2 on socket 0 00:03:21.005 EAL: Detected lcore 59 as core 3 on socket 0 00:03:21.005 EAL: Detected lcore 60 as core 4 on socket 0 00:03:21.005 EAL: Detected lcore 61 as core 5 on socket 0 00:03:21.005 EAL: Detected lcore 62 as core 6 on socket 0 00:03:21.005 EAL: Detected lcore 63 as core 8 on socket 0 00:03:21.005 EAL: Detected lcore 64 as core 9 on socket 0 00:03:21.005 EAL: Detected lcore 65 as core 10 on socket 0 00:03:21.005 EAL: Detected lcore 66 as core 11 on socket 0 00:03:21.005 EAL: Detected lcore 67 as core 12 on socket 0 00:03:21.005 EAL: Detected lcore 68 as core 13 on socket 0 00:03:21.005 EAL: Detected lcore 69 as core 14 on socket 0 00:03:21.005 EAL: Detected lcore 70 as core 16 on socket 0 00:03:21.005 EAL: Detected lcore 71 as core 
17 on socket 0 00:03:21.005 EAL: Detected lcore 72 as core 18 on socket 0 00:03:21.005 EAL: Detected lcore 73 as core 19 on socket 0 00:03:21.005 EAL: Detected lcore 74 as core 20 on socket 0 00:03:21.005 EAL: Detected lcore 75 as core 21 on socket 0 00:03:21.005 EAL: Detected lcore 76 as core 22 on socket 0 00:03:21.005 EAL: Detected lcore 77 as core 24 on socket 0 00:03:21.005 EAL: Detected lcore 78 as core 25 on socket 0 00:03:21.005 EAL: Detected lcore 79 as core 26 on socket 0 00:03:21.005 EAL: Detected lcore 80 as core 27 on socket 0 00:03:21.005 EAL: Detected lcore 81 as core 28 on socket 0 00:03:21.005 EAL: Detected lcore 82 as core 29 on socket 0 00:03:21.005 EAL: Detected lcore 83 as core 30 on socket 0 00:03:21.005 EAL: Detected lcore 84 as core 0 on socket 1 00:03:21.005 EAL: Detected lcore 85 as core 1 on socket 1 00:03:21.005 EAL: Detected lcore 86 as core 2 on socket 1 00:03:21.005 EAL: Detected lcore 87 as core 3 on socket 1 00:03:21.005 EAL: Detected lcore 88 as core 4 on socket 1 00:03:21.005 EAL: Detected lcore 89 as core 5 on socket 1 00:03:21.005 EAL: Detected lcore 90 as core 6 on socket 1 00:03:21.005 EAL: Detected lcore 91 as core 8 on socket 1 00:03:21.005 EAL: Detected lcore 92 as core 9 on socket 1 00:03:21.005 EAL: Detected lcore 93 as core 10 on socket 1 00:03:21.005 EAL: Detected lcore 94 as core 11 on socket 1 00:03:21.005 EAL: Detected lcore 95 as core 12 on socket 1 00:03:21.005 EAL: Detected lcore 96 as core 13 on socket 1 00:03:21.005 EAL: Detected lcore 97 as core 14 on socket 1 00:03:21.005 EAL: Detected lcore 98 as core 16 on socket 1 00:03:21.005 EAL: Detected lcore 99 as core 17 on socket 1 00:03:21.005 EAL: Detected lcore 100 as core 18 on socket 1 00:03:21.005 EAL: Detected lcore 101 as core 19 on socket 1 00:03:21.005 EAL: Detected lcore 102 as core 20 on socket 1 00:03:21.005 EAL: Detected lcore 103 as core 21 on socket 1 00:03:21.005 EAL: Detected lcore 104 as core 22 on socket 1 00:03:21.005 EAL: Detected lcore 105 as core 24 on socket 1 00:03:21.005 EAL: Detected lcore 106 as core 25 on socket 1 00:03:21.005 EAL: Detected lcore 107 as core 26 on socket 1 00:03:21.005 EAL: Detected lcore 108 as core 27 on socket 1 00:03:21.005 EAL: Detected lcore 109 as core 28 on socket 1 00:03:21.005 EAL: Detected lcore 110 as core 29 on socket 1 00:03:21.005 EAL: Detected lcore 111 as core 30 on socket 1 00:03:21.005 EAL: Maximum logical cores by configuration: 128 00:03:21.005 EAL: Detected CPU lcores: 112 00:03:21.005 EAL: Detected NUMA nodes: 2 00:03:21.005 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:21.005 EAL: Detected shared linkage of DPDK 00:03:21.005 EAL: No shared files mode enabled, IPC will be disabled 00:03:21.005 EAL: Bus pci wants IOVA as 'DC' 00:03:21.005 EAL: Buses did not request a specific IOVA mode. 00:03:21.005 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:21.005 EAL: Selected IOVA mode 'VA' 00:03:21.005 EAL: Probing VFIO support... 00:03:21.005 EAL: IOMMU type 1 (Type 1) is supported 00:03:21.005 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:21.005 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:21.005 EAL: VFIO support initialized 00:03:21.005 EAL: Ask a virtual area of 0x2e000 bytes 00:03:21.005 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:21.005 EAL: Setting up physically contiguous memory... 
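[editor's note] Each memseg-list reservation that follows asks for 0x400000000 bytes (16 GiB) of virtual address space, with 4 lists per NUMA socket and 2 sockets detected. A back-of-envelope check of the total VA reserved (arithmetic only, not EAL code):

  # 2 sockets x 4 memseg lists, each reserving 0x400000000 bytes of VA
  echo $(( 2 * 4 * 0x400000000 / (1024 * 1024 * 1024) ))   # prints 128 (GiB)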
00:03:21.005 EAL: Setting maximum number of open files to 524288 00:03:21.005 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:21.005 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:21.005 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:21.005 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:21.005 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.005 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:21.005 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.005 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.005 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:21.005 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:21.005 EAL: Hugepages will be freed exactly as allocated. 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: TSC frequency is ~2700000 KHz 00:03:21.005 EAL: Main lcore 0 is ready (tid=7f8514a1fa00;cpuset=[0]) 00:03:21.005 EAL: Trying to obtain current memory policy. 00:03:21.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.005 EAL: Restoring previous memory policy: 0 00:03:21.005 EAL: request: mp_malloc_sync 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: Heap on socket 0 was expanded by 2MB 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:21.005 EAL: Mem event callback 'spdk:(nil)' registered 00:03:21.005 00:03:21.005 00:03:21.005 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.005 http://cunit.sourceforge.net/ 00:03:21.005 00:03:21.005 00:03:21.005 Suite: components_suite 00:03:21.005 Test: vtophys_malloc_test ...passed 00:03:21.005 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:21.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.005 EAL: Restoring previous memory policy: 4 00:03:21.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.005 EAL: request: mp_malloc_sync 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: Heap on socket 0 was expanded by 4MB 00:03:21.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.005 EAL: request: mp_malloc_sync 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: Heap on socket 0 was shrunk by 4MB 00:03:21.005 EAL: Trying to obtain current memory policy. 00:03:21.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.005 EAL: Restoring previous memory policy: 4 00:03:21.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.005 EAL: request: mp_malloc_sync 00:03:21.005 EAL: No shared files mode enabled, IPC is disabled 00:03:21.005 EAL: Heap on socket 0 was expanded by 6MB 00:03:21.005 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was shrunk by 6MB 00:03:21.006 EAL: Trying to obtain current memory policy. 00:03:21.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.006 EAL: Restoring previous memory policy: 4 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was expanded by 10MB 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was shrunk by 10MB 00:03:21.006 EAL: Trying to obtain current memory policy. 
00:03:21.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.006 EAL: Restoring previous memory policy: 4 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was expanded by 18MB 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was shrunk by 18MB 00:03:21.006 EAL: Trying to obtain current memory policy. 00:03:21.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.006 EAL: Restoring previous memory policy: 4 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was expanded by 34MB 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was shrunk by 34MB 00:03:21.006 EAL: Trying to obtain current memory policy. 00:03:21.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.006 EAL: Restoring previous memory policy: 4 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was expanded by 66MB 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was shrunk by 66MB 00:03:21.006 EAL: Trying to obtain current memory policy. 00:03:21.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.006 EAL: Restoring previous memory policy: 4 00:03:21.006 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.006 EAL: request: mp_malloc_sync 00:03:21.006 EAL: No shared files mode enabled, IPC is disabled 00:03:21.006 EAL: Heap on socket 0 was expanded by 130MB 00:03:21.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.264 EAL: request: mp_malloc_sync 00:03:21.264 EAL: No shared files mode enabled, IPC is disabled 00:03:21.264 EAL: Heap on socket 0 was shrunk by 130MB 00:03:21.264 EAL: Trying to obtain current memory policy. 00:03:21.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.264 EAL: Restoring previous memory policy: 4 00:03:21.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.264 EAL: request: mp_malloc_sync 00:03:21.264 EAL: No shared files mode enabled, IPC is disabled 00:03:21.264 EAL: Heap on socket 0 was expanded by 258MB 00:03:21.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.264 EAL: request: mp_malloc_sync 00:03:21.264 EAL: No shared files mode enabled, IPC is disabled 00:03:21.264 EAL: Heap on socket 0 was shrunk by 258MB 00:03:21.264 EAL: Trying to obtain current memory policy. 
00:03:21.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.264 EAL: Restoring previous memory policy: 4 00:03:21.264 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.264 EAL: request: mp_malloc_sync 00:03:21.264 EAL: No shared files mode enabled, IPC is disabled 00:03:21.264 EAL: Heap on socket 0 was expanded by 514MB 00:03:21.523 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.523 EAL: request: mp_malloc_sync 00:03:21.523 EAL: No shared files mode enabled, IPC is disabled 00:03:21.523 EAL: Heap on socket 0 was shrunk by 514MB 00:03:21.523 EAL: Trying to obtain current memory policy. 00:03:21.523 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.782 EAL: Restoring previous memory policy: 4 00:03:21.782 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.782 EAL: request: mp_malloc_sync 00:03:21.782 EAL: No shared files mode enabled, IPC is disabled 00:03:21.782 EAL: Heap on socket 0 was expanded by 1026MB 00:03:21.782 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.041 EAL: request: mp_malloc_sync 00:03:22.041 EAL: No shared files mode enabled, IPC is disabled 00:03:22.041 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:22.041 passed 00:03:22.041 00:03:22.041 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.041 suites 1 1 n/a 0 0 00:03:22.041 tests 2 2 2 0 0 00:03:22.041 asserts 497 497 497 0 n/a 00:03:22.041 00:03:22.041 Elapsed time = 0.946 seconds 00:03:22.041 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.041 EAL: request: mp_malloc_sync 00:03:22.041 EAL: No shared files mode enabled, IPC is disabled 00:03:22.041 EAL: Heap on socket 0 was shrunk by 2MB 00:03:22.041 EAL: No shared files mode enabled, IPC is disabled 00:03:22.041 EAL: No shared files mode enabled, IPC is disabled 00:03:22.041 EAL: No shared files mode enabled, IPC is disabled 00:03:22.041 00:03:22.041 real 0m1.061s 00:03:22.041 user 0m0.635s 00:03:22.041 sys 0m0.399s 00:03:22.041 16:16:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.041 16:16:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:22.041 ************************************ 00:03:22.041 END TEST env_vtophys 00:03:22.041 ************************************ 00:03:22.041 16:16:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.041 16:16:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.042 16:16:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.042 16:16:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.042 ************************************ 00:03:22.042 START TEST env_pci 00:03:22.042 ************************************ 00:03:22.042 16:16:16 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:22.042 00:03:22.042 00:03:22.042 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.042 http://cunit.sourceforge.net/ 00:03:22.042 00:03:22.042 00:03:22.042 Suite: pci 00:03:22.042 Test: pci_hook ...[2024-12-06 16:16:16.712762] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3590041 has claimed it 00:03:22.042 EAL: Cannot find device (10000:00:01.0) 00:03:22.042 EAL: Failed to attach device on primary process 00:03:22.042 passed 00:03:22.042 00:03:22.042 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.042 suites 1 
1 n/a 0 0 00:03:22.042 tests 1 1 1 0 0 00:03:22.042 asserts 25 25 25 0 n/a 00:03:22.042 00:03:22.042 Elapsed time = 0.029 seconds 00:03:22.042 00:03:22.042 real 0m0.049s 00:03:22.042 user 0m0.017s 00:03:22.042 sys 0m0.031s 00:03:22.042 16:16:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.042 16:16:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:22.042 ************************************ 00:03:22.042 END TEST env_pci 00:03:22.042 ************************************ 00:03:22.301 16:16:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:22.301 16:16:16 env -- env/env.sh@15 -- # uname 00:03:22.301 16:16:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:22.301 16:16:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:22.301 16:16:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.301 16:16:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:22.301 16:16:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.301 16:16:16 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.301 ************************************ 00:03:22.301 START TEST env_dpdk_post_init 00:03:22.301 ************************************ 00:03:22.301 16:16:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:22.301 EAL: Detected CPU lcores: 112 00:03:22.301 EAL: Detected NUMA nodes: 2 00:03:22.301 EAL: Detected shared linkage of DPDK 00:03:22.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.301 EAL: Selected IOVA mode 'VA' 00:03:22.301 EAL: VFIO support initialized 00:03:22.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.301 EAL: Using IOMMU type 1 (Type 1) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:22.301 EAL: Ignore mapping IO port bar(1) 00:03:22.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port 
bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:22.561 EAL: Ignore mapping IO port bar(1) 00:03:22.561 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:23.497 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:03:28.768 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:03:28.768 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:03:28.768 Starting DPDK initialization... 00:03:28.768 Starting SPDK post initialization... 00:03:28.768 SPDK NVMe probe 00:03:28.768 Attaching to 0000:d8:00.0 00:03:28.768 Attached to 0000:d8:00.0 00:03:28.768 Cleaning up... 00:03:28.768 00:03:28.768 real 0m6.666s 00:03:28.768 user 0m5.114s 00:03:28.768 sys 0m0.614s 00:03:28.768 16:16:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.768 16:16:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:28.768 ************************************ 00:03:28.768 END TEST env_dpdk_post_init 00:03:28.768 ************************************ 00:03:29.028 16:16:23 env -- env/env.sh@26 -- # uname 00:03:29.028 16:16:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:29.028 16:16:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:29.028 16:16:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.028 16:16:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.028 16:16:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.028 ************************************ 00:03:29.028 START TEST env_mem_callbacks 00:03:29.028 ************************************ 00:03:29.028 16:16:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:29.028 EAL: Detected CPU lcores: 112 00:03:29.028 EAL: Detected NUMA nodes: 2 00:03:29.028 EAL: Detected shared linkage of DPDK 00:03:29.028 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:29.028 EAL: Selected IOVA mode 'VA' 00:03:29.028 EAL: VFIO support initialized 00:03:29.028 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:29.028 00:03:29.028 00:03:29.028 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.028 http://cunit.sourceforge.net/ 00:03:29.028 00:03:29.028 00:03:29.028 Suite: memory 00:03:29.028 Test: test ... 
00:03:29.028 register 0x200000200000 2097152 00:03:29.028 malloc 3145728 00:03:29.028 register 0x200000400000 4194304 00:03:29.028 buf 0x200000500000 len 3145728 PASSED 00:03:29.028 malloc 64 00:03:29.028 buf 0x2000004fff40 len 64 PASSED 00:03:29.028 malloc 4194304 00:03:29.028 register 0x200000800000 6291456 00:03:29.028 buf 0x200000a00000 len 4194304 PASSED 00:03:29.028 free 0x200000500000 3145728 00:03:29.028 free 0x2000004fff40 64 00:03:29.028 unregister 0x200000400000 4194304 PASSED 00:03:29.028 free 0x200000a00000 4194304 00:03:29.028 unregister 0x200000800000 6291456 PASSED 00:03:29.028 malloc 8388608 00:03:29.028 register 0x200000400000 10485760 00:03:29.028 buf 0x200000600000 len 8388608 PASSED 00:03:29.028 free 0x200000600000 8388608 00:03:29.028 unregister 0x200000400000 10485760 PASSED 00:03:29.028 passed 00:03:29.028 00:03:29.028 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.028 suites 1 1 n/a 0 0 00:03:29.028 tests 1 1 1 0 0 00:03:29.028 asserts 15 15 15 0 n/a 00:03:29.028 00:03:29.028 Elapsed time = 0.006 seconds 00:03:29.028 00:03:29.028 real 0m0.056s 00:03:29.028 user 0m0.016s 00:03:29.028 sys 0m0.040s 00:03:29.028 16:16:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.028 16:16:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:29.028 ************************************ 00:03:29.028 END TEST env_mem_callbacks 00:03:29.028 ************************************ 00:03:29.028 00:03:29.028 real 0m8.487s 00:03:29.028 user 0m6.143s 00:03:29.028 sys 0m1.411s 00:03:29.028 16:16:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.028 16:16:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:29.028 ************************************ 00:03:29.028 END TEST env 00:03:29.028 ************************************ 00:03:29.028 16:16:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:29.028 16:16:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.028 16:16:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.028 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:03:29.028 ************************************ 00:03:29.028 START TEST rpc 00:03:29.028 ************************************ 00:03:29.028 16:16:23 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:29.288 * Looking for test storage... 
00:03:29.288 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.288 16:16:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.288 16:16:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.288 16:16:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.288 16:16:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.288 16:16:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.288 16:16:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:29.288 16:16:23 rpc -- scripts/common.sh@345 -- # : 1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.288 16:16:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:29.288 16:16:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@353 -- # local d=1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.288 16:16:23 rpc -- scripts/common.sh@355 -- # echo 1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.288 16:16:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@353 -- # local d=2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.288 16:16:23 rpc -- scripts/common.sh@355 -- # echo 2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.288 16:16:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.288 16:16:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.288 16:16:23 rpc -- scripts/common.sh@368 -- # return 0 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.288 --rc genhtml_branch_coverage=1 00:03:29.288 --rc genhtml_function_coverage=1 00:03:29.288 --rc genhtml_legend=1 00:03:29.288 --rc geninfo_all_blocks=1 00:03:29.288 --rc geninfo_unexecuted_blocks=1 00:03:29.288 00:03:29.288 ' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.288 --rc genhtml_branch_coverage=1 00:03:29.288 --rc genhtml_function_coverage=1 00:03:29.288 --rc genhtml_legend=1 00:03:29.288 --rc geninfo_all_blocks=1 00:03:29.288 --rc geninfo_unexecuted_blocks=1 00:03:29.288 00:03:29.288 ' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.288 --rc genhtml_branch_coverage=1 00:03:29.288 --rc genhtml_function_coverage=1 00:03:29.288 
--rc genhtml_legend=1 00:03:29.288 --rc geninfo_all_blocks=1 00:03:29.288 --rc geninfo_unexecuted_blocks=1 00:03:29.288 00:03:29.288 ' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:29.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.288 --rc genhtml_branch_coverage=1 00:03:29.288 --rc genhtml_function_coverage=1 00:03:29.288 --rc genhtml_legend=1 00:03:29.288 --rc geninfo_all_blocks=1 00:03:29.288 --rc geninfo_unexecuted_blocks=1 00:03:29.288 00:03:29.288 ' 00:03:29.288 16:16:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3591488 00:03:29.288 16:16:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.288 16:16:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3591488 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@835 -- # '[' -z 3591488 ']' 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:29.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:29.288 16:16:23 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:29.288 16:16:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.288 [2024-12-06 16:16:23.942239] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:03:29.288 [2024-12-06 16:16:23.942286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591488 ] 00:03:29.288 [2024-12-06 16:16:23.998899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.547 [2024-12-06 16:16:24.037926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:29.547 [2024-12-06 16:16:24.037957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3591488' to capture a snapshot of events at runtime. 00:03:29.547 [2024-12-06 16:16:24.037964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:29.547 [2024-12-06 16:16:24.037970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:29.548 [2024-12-06 16:16:24.037975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3591488 for offline analysis/debug. 
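The app_setup_trace notices above describe SPDK's trace-snapshot workflow: because spdk_tgt was started with a tpoint group mask (bdev, mask 0x8), its trace ring is backed by /dev/shm/spdk_tgt_trace.pid3591488 and can be inspected live or offline. A minimal sketch, assuming a default SPDK build tree; the commands restate what the notices themselves suggest, and only the redirection target is illustrative:

  # snapshot the running target's tracepoints, using the pid printed in the log
  ./build/bin/spdk_trace -s spdk_tgt -p 3591488 > trace_snapshot.txt
  # or parse the shared-memory trace file the target leaves behind after it exits
  ./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid3591488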
00:03:29.548 [2024-12-06 16:16:24.038451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.548 16:16:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:29.548 16:16:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:29.548 16:16:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:29.548 16:16:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:29.548 16:16:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:29.548 16:16:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:29.548 16:16:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.548 16:16:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.548 16:16:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.548 ************************************ 00:03:29.548 START TEST rpc_integrity 00:03:29.548 ************************************ 00:03:29.548 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:29.548 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.548 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.548 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.548 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.548 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.548 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:29.807 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.807 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.807 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:29.807 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.807 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.807 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.807 { 00:03:29.807 "name": "Malloc0", 00:03:29.807 "aliases": [ 00:03:29.807 "d24fc194-b942-488f-a31a-2521129f6538" 00:03:29.807 ], 00:03:29.807 "product_name": "Malloc disk", 00:03:29.807 "block_size": 512, 00:03:29.807 "num_blocks": 16384, 00:03:29.807 "uuid": "d24fc194-b942-488f-a31a-2521129f6538", 00:03:29.807 "assigned_rate_limits": { 00:03:29.807 "rw_ios_per_sec": 0, 00:03:29.807 "rw_mbytes_per_sec": 0, 00:03:29.807 "r_mbytes_per_sec": 0, 00:03:29.807 "w_mbytes_per_sec": 0 00:03:29.807 }, 00:03:29.807 "claimed": false, 
00:03:29.807 "zoned": false, 00:03:29.807 "supported_io_types": { 00:03:29.807 "read": true, 00:03:29.807 "write": true, 00:03:29.807 "unmap": true, 00:03:29.807 "flush": true, 00:03:29.807 "reset": true, 00:03:29.807 "nvme_admin": false, 00:03:29.807 "nvme_io": false, 00:03:29.807 "nvme_io_md": false, 00:03:29.807 "write_zeroes": true, 00:03:29.807 "zcopy": true, 00:03:29.807 "get_zone_info": false, 00:03:29.807 "zone_management": false, 00:03:29.807 "zone_append": false, 00:03:29.807 "compare": false, 00:03:29.807 "compare_and_write": false, 00:03:29.807 "abort": true, 00:03:29.807 "seek_hole": false, 00:03:29.807 "seek_data": false, 00:03:29.807 "copy": true, 00:03:29.807 "nvme_iov_md": false 00:03:29.807 }, 00:03:29.807 "memory_domains": [ 00:03:29.807 { 00:03:29.808 "dma_device_id": "system", 00:03:29.808 "dma_device_type": 1 00:03:29.808 }, 00:03:29.808 { 00:03:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.808 "dma_device_type": 2 00:03:29.808 } 00:03:29.808 ], 00:03:29.808 "driver_specific": {} 00:03:29.808 } 00:03:29.808 ]' 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 [2024-12-06 16:16:24.360832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:29.808 [2024-12-06 16:16:24.360858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.808 [2024-12-06 16:16:24.360869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x231fe60 00:03:29.808 [2024-12-06 16:16:24.360875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.808 [2024-12-06 16:16:24.361880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.808 [2024-12-06 16:16:24.361901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.808 Passthru0 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.808 { 00:03:29.808 "name": "Malloc0", 00:03:29.808 "aliases": [ 00:03:29.808 "d24fc194-b942-488f-a31a-2521129f6538" 00:03:29.808 ], 00:03:29.808 "product_name": "Malloc disk", 00:03:29.808 "block_size": 512, 00:03:29.808 "num_blocks": 16384, 00:03:29.808 "uuid": "d24fc194-b942-488f-a31a-2521129f6538", 00:03:29.808 "assigned_rate_limits": { 00:03:29.808 "rw_ios_per_sec": 0, 00:03:29.808 "rw_mbytes_per_sec": 0, 00:03:29.808 "r_mbytes_per_sec": 0, 00:03:29.808 "w_mbytes_per_sec": 0 00:03:29.808 }, 00:03:29.808 "claimed": true, 00:03:29.808 "claim_type": "exclusive_write", 00:03:29.808 "zoned": false, 00:03:29.808 "supported_io_types": { 00:03:29.808 "read": true, 00:03:29.808 "write": true, 00:03:29.808 "unmap": true, 00:03:29.808 "flush": true, 00:03:29.808 "reset": true, 
00:03:29.808 "nvme_admin": false, 00:03:29.808 "nvme_io": false, 00:03:29.808 "nvme_io_md": false, 00:03:29.808 "write_zeroes": true, 00:03:29.808 "zcopy": true, 00:03:29.808 "get_zone_info": false, 00:03:29.808 "zone_management": false, 00:03:29.808 "zone_append": false, 00:03:29.808 "compare": false, 00:03:29.808 "compare_and_write": false, 00:03:29.808 "abort": true, 00:03:29.808 "seek_hole": false, 00:03:29.808 "seek_data": false, 00:03:29.808 "copy": true, 00:03:29.808 "nvme_iov_md": false 00:03:29.808 }, 00:03:29.808 "memory_domains": [ 00:03:29.808 { 00:03:29.808 "dma_device_id": "system", 00:03:29.808 "dma_device_type": 1 00:03:29.808 }, 00:03:29.808 { 00:03:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.808 "dma_device_type": 2 00:03:29.808 } 00:03:29.808 ], 00:03:29.808 "driver_specific": {} 00:03:29.808 }, 00:03:29.808 { 00:03:29.808 "name": "Passthru0", 00:03:29.808 "aliases": [ 00:03:29.808 "e50d8e34-9b91-5c51-98fc-2516ef1a2682" 00:03:29.808 ], 00:03:29.808 "product_name": "passthru", 00:03:29.808 "block_size": 512, 00:03:29.808 "num_blocks": 16384, 00:03:29.808 "uuid": "e50d8e34-9b91-5c51-98fc-2516ef1a2682", 00:03:29.808 "assigned_rate_limits": { 00:03:29.808 "rw_ios_per_sec": 0, 00:03:29.808 "rw_mbytes_per_sec": 0, 00:03:29.808 "r_mbytes_per_sec": 0, 00:03:29.808 "w_mbytes_per_sec": 0 00:03:29.808 }, 00:03:29.808 "claimed": false, 00:03:29.808 "zoned": false, 00:03:29.808 "supported_io_types": { 00:03:29.808 "read": true, 00:03:29.808 "write": true, 00:03:29.808 "unmap": true, 00:03:29.808 "flush": true, 00:03:29.808 "reset": true, 00:03:29.808 "nvme_admin": false, 00:03:29.808 "nvme_io": false, 00:03:29.808 "nvme_io_md": false, 00:03:29.808 "write_zeroes": true, 00:03:29.808 "zcopy": true, 00:03:29.808 "get_zone_info": false, 00:03:29.808 "zone_management": false, 00:03:29.808 "zone_append": false, 00:03:29.808 "compare": false, 00:03:29.808 "compare_and_write": false, 00:03:29.808 "abort": true, 00:03:29.808 "seek_hole": false, 00:03:29.808 "seek_data": false, 00:03:29.808 "copy": true, 00:03:29.808 "nvme_iov_md": false 00:03:29.808 }, 00:03:29.808 "memory_domains": [ 00:03:29.808 { 00:03:29.808 "dma_device_id": "system", 00:03:29.808 "dma_device_type": 1 00:03:29.808 }, 00:03:29.808 { 00:03:29.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.808 "dma_device_type": 2 00:03:29.808 } 00:03:29.808 ], 00:03:29.808 "driver_specific": { 00:03:29.808 "passthru": { 00:03:29.808 "name": "Passthru0", 00:03:29.808 "base_bdev_name": "Malloc0" 00:03:29.808 } 00:03:29.808 } 00:03:29.808 } 00:03:29.808 ]' 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.808 
16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:29.808 16:16:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.808 00:03:29.808 real 0m0.235s 00:03:29.808 user 0m0.154s 00:03:29.808 sys 0m0.019s 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:29.808 16:16:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:29.808 ************************************ 00:03:29.808 END TEST rpc_integrity 00:03:29.808 ************************************ 00:03:29.808 16:16:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:29.808 16:16:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:29.808 16:16:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.808 16:16:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 ************************************ 00:03:30.068 START TEST rpc_plugins 00:03:30.068 ************************************ 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:30.068 { 00:03:30.068 "name": "Malloc1", 00:03:30.068 "aliases": [ 00:03:30.068 "949b8a1b-7d9f-468c-a2d7-65e813bed2f9" 00:03:30.068 ], 00:03:30.068 "product_name": "Malloc disk", 00:03:30.068 "block_size": 4096, 00:03:30.068 "num_blocks": 256, 00:03:30.068 "uuid": "949b8a1b-7d9f-468c-a2d7-65e813bed2f9", 00:03:30.068 "assigned_rate_limits": { 00:03:30.068 "rw_ios_per_sec": 0, 00:03:30.068 "rw_mbytes_per_sec": 0, 00:03:30.068 "r_mbytes_per_sec": 0, 00:03:30.068 "w_mbytes_per_sec": 0 00:03:30.068 }, 00:03:30.068 "claimed": false, 00:03:30.068 "zoned": false, 00:03:30.068 "supported_io_types": { 00:03:30.068 "read": true, 00:03:30.068 "write": true, 00:03:30.068 "unmap": true, 00:03:30.068 "flush": true, 00:03:30.068 "reset": true, 00:03:30.068 "nvme_admin": false, 00:03:30.068 "nvme_io": false, 00:03:30.068 "nvme_io_md": false, 00:03:30.068 "write_zeroes": true, 00:03:30.068 "zcopy": true, 00:03:30.068 "get_zone_info": false, 00:03:30.068 "zone_management": false, 00:03:30.068 "zone_append": false, 00:03:30.068 "compare": false, 00:03:30.068 "compare_and_write": false, 00:03:30.068 "abort": true, 00:03:30.068 "seek_hole": false, 00:03:30.068 "seek_data": false, 00:03:30.068 "copy": true, 00:03:30.068 "nvme_iov_md": false 00:03:30.068 }, 00:03:30.068 
"memory_domains": [ 00:03:30.068 { 00:03:30.068 "dma_device_id": "system", 00:03:30.068 "dma_device_type": 1 00:03:30.068 }, 00:03:30.068 { 00:03:30.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.068 "dma_device_type": 2 00:03:30.068 } 00:03:30.068 ], 00:03:30.068 "driver_specific": {} 00:03:30.068 } 00:03:30.068 ]' 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:30.068 16:16:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:30.068 00:03:30.068 real 0m0.108s 00:03:30.068 user 0m0.060s 00:03:30.068 sys 0m0.012s 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 ************************************ 00:03:30.068 END TEST rpc_plugins 00:03:30.068 ************************************ 00:03:30.068 16:16:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:30.068 16:16:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.068 16:16:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.068 16:16:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 ************************************ 00:03:30.068 START TEST rpc_trace_cmd_test 00:03:30.068 ************************************ 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:30.068 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3591488", 00:03:30.068 "tpoint_group_mask": "0x8", 00:03:30.068 "iscsi_conn": { 00:03:30.068 "mask": "0x2", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "scsi": { 00:03:30.068 "mask": "0x4", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "bdev": { 00:03:30.068 "mask": "0x8", 00:03:30.068 "tpoint_mask": "0xffffffffffffffff" 00:03:30.068 }, 00:03:30.068 "nvmf_rdma": { 00:03:30.068 "mask": "0x10", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "nvmf_tcp": { 00:03:30.068 "mask": "0x20", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 
00:03:30.068 "ftl": { 00:03:30.068 "mask": "0x40", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "blobfs": { 00:03:30.068 "mask": "0x80", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "dsa": { 00:03:30.068 "mask": "0x200", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "thread": { 00:03:30.068 "mask": "0x400", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "nvme_pcie": { 00:03:30.068 "mask": "0x800", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "iaa": { 00:03:30.068 "mask": "0x1000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "nvme_tcp": { 00:03:30.068 "mask": "0x2000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "bdev_nvme": { 00:03:30.068 "mask": "0x4000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "sock": { 00:03:30.068 "mask": "0x8000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "blob": { 00:03:30.068 "mask": "0x10000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "bdev_raid": { 00:03:30.068 "mask": "0x20000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 }, 00:03:30.068 "scheduler": { 00:03:30.068 "mask": "0x40000", 00:03:30.068 "tpoint_mask": "0x0" 00:03:30.068 } 00:03:30.068 }' 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:30.068 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:30.328 00:03:30.328 real 0m0.171s 00:03:30.328 user 0m0.141s 00:03:30.328 sys 0m0.023s 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.328 16:16:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:30.328 ************************************ 00:03:30.328 END TEST rpc_trace_cmd_test 00:03:30.328 ************************************ 00:03:30.328 16:16:24 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:30.328 16:16:24 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:30.328 16:16:24 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:30.328 16:16:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:30.328 16:16:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.328 16:16:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.328 ************************************ 00:03:30.328 START TEST rpc_daemon_integrity 00:03:30.328 ************************************ 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:30.328 16:16:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:30.328 { 00:03:30.328 "name": "Malloc2", 00:03:30.328 "aliases": [ 00:03:30.328 "99529eb8-2ac7-40b1-ac0b-74ba395edd40" 00:03:30.328 ], 00:03:30.328 "product_name": "Malloc disk", 00:03:30.328 "block_size": 512, 00:03:30.328 "num_blocks": 16384, 00:03:30.328 "uuid": "99529eb8-2ac7-40b1-ac0b-74ba395edd40", 00:03:30.328 "assigned_rate_limits": { 00:03:30.328 "rw_ios_per_sec": 0, 00:03:30.328 "rw_mbytes_per_sec": 0, 00:03:30.328 "r_mbytes_per_sec": 0, 00:03:30.328 "w_mbytes_per_sec": 0 00:03:30.328 }, 00:03:30.328 "claimed": false, 00:03:30.328 "zoned": false, 00:03:30.328 "supported_io_types": { 00:03:30.328 "read": true, 00:03:30.328 "write": true, 00:03:30.328 "unmap": true, 00:03:30.328 "flush": true, 00:03:30.328 "reset": true, 00:03:30.328 "nvme_admin": false, 00:03:30.328 "nvme_io": false, 00:03:30.328 "nvme_io_md": false, 00:03:30.328 "write_zeroes": true, 00:03:30.328 "zcopy": true, 00:03:30.328 "get_zone_info": false, 00:03:30.328 "zone_management": false, 00:03:30.328 "zone_append": false, 00:03:30.328 "compare": false, 00:03:30.328 "compare_and_write": false, 00:03:30.328 "abort": true, 00:03:30.328 "seek_hole": false, 00:03:30.328 "seek_data": false, 00:03:30.328 "copy": true, 00:03:30.328 "nvme_iov_md": false 00:03:30.328 }, 00:03:30.328 "memory_domains": [ 00:03:30.328 { 00:03:30.328 "dma_device_id": "system", 00:03:30.328 "dma_device_type": 1 00:03:30.328 }, 00:03:30.328 { 00:03:30.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.328 "dma_device_type": 2 00:03:30.328 } 00:03:30.328 ], 00:03:30.328 "driver_specific": {} 00:03:30.328 } 00:03:30.328 ]' 00:03:30.328 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 [2024-12-06 16:16:25.074709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:30.588 [2024-12-06 16:16:25.074735] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:30.588 [2024-12-06 16:16:25.074748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23202b0 00:03:30.588 [2024-12-06 16:16:25.074754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:30.588 [2024-12-06 16:16:25.075661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:30.588 [2024-12-06 16:16:25.075681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:30.588 Passthru0 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:30.588 { 00:03:30.588 "name": "Malloc2", 00:03:30.588 "aliases": [ 00:03:30.588 "99529eb8-2ac7-40b1-ac0b-74ba395edd40" 00:03:30.588 ], 00:03:30.588 "product_name": "Malloc disk", 00:03:30.588 "block_size": 512, 00:03:30.588 "num_blocks": 16384, 00:03:30.588 "uuid": "99529eb8-2ac7-40b1-ac0b-74ba395edd40", 00:03:30.588 "assigned_rate_limits": { 00:03:30.588 "rw_ios_per_sec": 0, 00:03:30.588 "rw_mbytes_per_sec": 0, 00:03:30.588 "r_mbytes_per_sec": 0, 00:03:30.588 "w_mbytes_per_sec": 0 00:03:30.588 }, 00:03:30.588 "claimed": true, 00:03:30.588 "claim_type": "exclusive_write", 00:03:30.588 "zoned": false, 00:03:30.588 "supported_io_types": { 00:03:30.588 "read": true, 00:03:30.588 "write": true, 00:03:30.588 "unmap": true, 00:03:30.588 "flush": true, 00:03:30.588 "reset": true, 00:03:30.588 "nvme_admin": false, 00:03:30.588 "nvme_io": false, 00:03:30.588 "nvme_io_md": false, 00:03:30.588 "write_zeroes": true, 00:03:30.588 "zcopy": true, 00:03:30.588 "get_zone_info": false, 00:03:30.588 "zone_management": false, 00:03:30.588 "zone_append": false, 00:03:30.588 "compare": false, 00:03:30.588 "compare_and_write": false, 00:03:30.588 "abort": true, 00:03:30.588 "seek_hole": false, 00:03:30.588 "seek_data": false, 00:03:30.588 "copy": true, 00:03:30.588 "nvme_iov_md": false 00:03:30.588 }, 00:03:30.588 "memory_domains": [ 00:03:30.588 { 00:03:30.588 "dma_device_id": "system", 00:03:30.588 "dma_device_type": 1 00:03:30.588 }, 00:03:30.588 { 00:03:30.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.588 "dma_device_type": 2 00:03:30.588 } 00:03:30.588 ], 00:03:30.588 "driver_specific": {} 00:03:30.588 }, 00:03:30.588 { 00:03:30.588 "name": "Passthru0", 00:03:30.588 "aliases": [ 00:03:30.588 "218b4c12-8ff2-5858-aa83-91878caec935" 00:03:30.588 ], 00:03:30.588 "product_name": "passthru", 00:03:30.588 "block_size": 512, 00:03:30.588 "num_blocks": 16384, 00:03:30.588 "uuid": "218b4c12-8ff2-5858-aa83-91878caec935", 00:03:30.588 "assigned_rate_limits": { 00:03:30.588 "rw_ios_per_sec": 0, 00:03:30.588 "rw_mbytes_per_sec": 0, 00:03:30.588 "r_mbytes_per_sec": 0, 00:03:30.588 "w_mbytes_per_sec": 0 00:03:30.588 }, 00:03:30.588 "claimed": false, 00:03:30.588 "zoned": false, 00:03:30.588 "supported_io_types": { 00:03:30.588 "read": true, 00:03:30.588 "write": true, 00:03:30.588 "unmap": true, 00:03:30.588 "flush": true, 00:03:30.588 "reset": true, 00:03:30.588 "nvme_admin": false, 
00:03:30.588 "nvme_io": false, 00:03:30.588 "nvme_io_md": false, 00:03:30.588 "write_zeroes": true, 00:03:30.588 "zcopy": true, 00:03:30.588 "get_zone_info": false, 00:03:30.588 "zone_management": false, 00:03:30.588 "zone_append": false, 00:03:30.588 "compare": false, 00:03:30.588 "compare_and_write": false, 00:03:30.588 "abort": true, 00:03:30.588 "seek_hole": false, 00:03:30.588 "seek_data": false, 00:03:30.588 "copy": true, 00:03:30.588 "nvme_iov_md": false 00:03:30.588 }, 00:03:30.588 "memory_domains": [ 00:03:30.588 { 00:03:30.588 "dma_device_id": "system", 00:03:30.588 "dma_device_type": 1 00:03:30.588 }, 00:03:30.588 { 00:03:30.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.588 "dma_device_type": 2 00:03:30.588 } 00:03:30.588 ], 00:03:30.588 "driver_specific": { 00:03:30.588 "passthru": { 00:03:30.588 "name": "Passthru0", 00:03:30.588 "base_bdev_name": "Malloc2" 00:03:30.588 } 00:03:30.588 } 00:03:30.588 } 00:03:30.588 ]' 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:30.588 00:03:30.588 real 0m0.245s 00:03:30.588 user 0m0.158s 00:03:30.588 sys 0m0.025s 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.588 16:16:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.588 ************************************ 00:03:30.588 END TEST rpc_daemon_integrity 00:03:30.588 ************************************ 00:03:30.588 16:16:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:30.588 16:16:25 rpc -- rpc/rpc.sh@84 -- # killprocess 3591488 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@954 -- # '[' -z 3591488 ']' 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@958 -- # kill -0 3591488 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@959 -- # uname 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3591488 00:03:30.588 16:16:25 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3591488' 00:03:30.588 killing process with pid 3591488 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@973 -- # kill 3591488 00:03:30.588 16:16:25 rpc -- common/autotest_common.sh@978 -- # wait 3591488 00:03:30.846 00:03:30.846 real 0m1.845s 00:03:30.846 user 0m2.310s 00:03:30.846 sys 0m0.601s 00:03:30.846 16:16:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.846 16:16:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.846 ************************************ 00:03:30.846 END TEST rpc 00:03:30.846 ************************************ 00:03:31.170 16:16:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:31.170 16:16:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.170 16:16:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.170 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:03:31.170 ************************************ 00:03:31.170 START TEST skip_rpc 00:03:31.170 ************************************ 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:31.170 * Looking for test storage... 00:03:31.170 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.170 16:16:25 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:31.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.170 --rc genhtml_branch_coverage=1 00:03:31.170 --rc genhtml_function_coverage=1 00:03:31.170 --rc genhtml_legend=1 00:03:31.170 --rc geninfo_all_blocks=1 00:03:31.170 --rc geninfo_unexecuted_blocks=1 00:03:31.170 00:03:31.170 ' 00:03:31.170 16:16:25 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:31.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.170 --rc genhtml_branch_coverage=1 00:03:31.170 --rc genhtml_function_coverage=1 00:03:31.170 --rc genhtml_legend=1 00:03:31.170 --rc geninfo_all_blocks=1 00:03:31.171 --rc geninfo_unexecuted_blocks=1 00:03:31.171 00:03:31.171 ' 00:03:31.171 16:16:25 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:31.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.171 --rc genhtml_branch_coverage=1 00:03:31.171 --rc genhtml_function_coverage=1 00:03:31.171 --rc genhtml_legend=1 00:03:31.171 --rc geninfo_all_blocks=1 00:03:31.171 --rc geninfo_unexecuted_blocks=1 00:03:31.171 00:03:31.171 ' 00:03:31.171 16:16:25 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:31.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.171 --rc genhtml_branch_coverage=1 00:03:31.171 --rc genhtml_function_coverage=1 00:03:31.171 --rc genhtml_legend=1 00:03:31.171 --rc geninfo_all_blocks=1 00:03:31.171 --rc geninfo_unexecuted_blocks=1 00:03:31.171 00:03:31.171 ' 00:03:31.171 16:16:25 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:31.171 16:16:25 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:31.171 16:16:25 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:31.171 16:16:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.171 16:16:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.171 16:16:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.171 ************************************ 00:03:31.171 START TEST skip_rpc 00:03:31.171 ************************************ 00:03:31.171 16:16:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:31.171 16:16:25 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3591940 00:03:31.171 16:16:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.171 16:16:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:31.171 16:16:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:31.443 [2024-12-06 16:16:25.891114] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:03:31.443 [2024-12-06 16:16:25.891153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591940 ] 00:03:31.443 [2024-12-06 16:16:25.949159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.443 [2024-12-06 16:16:25.986369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3591940 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3591940 ']' 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3591940 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3591940 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3591940' 00:03:36.730 killing process with pid 3591940 00:03:36.730 16:16:30 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3591940 00:03:36.730 16:16:30 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3591940 00:03:36.730 00:03:36.730 real 0m5.365s 00:03:36.730 user 0m5.126s 00:03:36.730 sys 0m0.273s 00:03:36.730 16:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.730 16:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.730 ************************************ 00:03:36.730 END TEST skip_rpc 00:03:36.730 ************************************ 00:03:36.730 16:16:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:36.730 16:16:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.730 16:16:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.730 16:16:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.730 ************************************ 00:03:36.730 START TEST skip_rpc_with_json 00:03:36.730 ************************************ 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3593013 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3593013 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3593013 ']' 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:36.730 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.730 [2024-12-06 16:16:31.327610] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:03:36.730 [2024-12-06 16:16:31.327653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593013 ] 00:03:36.730 [2024-12-06 16:16:31.386631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.730 [2024-12-06 16:16:31.423395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.990 [2024-12-06 16:16:31.629099] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:36.990 request: 00:03:36.990 { 00:03:36.990 "trtype": "tcp", 00:03:36.990 "method": "nvmf_get_transports", 00:03:36.990 "req_id": 1 00:03:36.990 } 00:03:36.990 Got JSON-RPC error response 00:03:36.990 response: 00:03:36.990 { 00:03:36.990 "code": -19, 00:03:36.990 "message": "No such device" 00:03:36.990 } 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:36.990 [2024-12-06 16:16:31.637190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:36.990 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.250 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:37.250 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:37.250 { 00:03:37.250 "subsystems": [ 00:03:37.250 { 00:03:37.250 "subsystem": "fsdev", 00:03:37.250 "config": [ 00:03:37.250 { 00:03:37.250 "method": "fsdev_set_opts", 00:03:37.250 "params": { 00:03:37.250 "fsdev_io_pool_size": 65535, 00:03:37.250 "fsdev_io_cache_size": 256 00:03:37.250 } 00:03:37.250 } 00:03:37.250 ] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "keyring", 00:03:37.250 "config": [] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "iobuf", 00:03:37.250 "config": [ 00:03:37.250 { 00:03:37.250 "method": "iobuf_set_options", 00:03:37.250 "params": { 00:03:37.250 "small_pool_count": 8192, 00:03:37.250 "large_pool_count": 1024, 00:03:37.250 "small_bufsize": 8192, 00:03:37.250 "large_bufsize": 135168, 00:03:37.250 "enable_numa": false 00:03:37.250 } 00:03:37.250 } 00:03:37.250 ] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "sock", 00:03:37.250 "config": [ 00:03:37.250 { 
00:03:37.250 "method": "sock_set_default_impl", 00:03:37.250 "params": { 00:03:37.250 "impl_name": "posix" 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "method": "sock_impl_set_options", 00:03:37.250 "params": { 00:03:37.250 "impl_name": "ssl", 00:03:37.250 "recv_buf_size": 4096, 00:03:37.250 "send_buf_size": 4096, 00:03:37.250 "enable_recv_pipe": true, 00:03:37.250 "enable_quickack": false, 00:03:37.250 "enable_placement_id": 0, 00:03:37.250 "enable_zerocopy_send_server": true, 00:03:37.250 "enable_zerocopy_send_client": false, 00:03:37.250 "zerocopy_threshold": 0, 00:03:37.250 "tls_version": 0, 00:03:37.250 "enable_ktls": false 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "method": "sock_impl_set_options", 00:03:37.250 "params": { 00:03:37.250 "impl_name": "posix", 00:03:37.250 "recv_buf_size": 2097152, 00:03:37.250 "send_buf_size": 2097152, 00:03:37.250 "enable_recv_pipe": true, 00:03:37.250 "enable_quickack": false, 00:03:37.250 "enable_placement_id": 0, 00:03:37.250 "enable_zerocopy_send_server": true, 00:03:37.250 "enable_zerocopy_send_client": false, 00:03:37.250 "zerocopy_threshold": 0, 00:03:37.250 "tls_version": 0, 00:03:37.250 "enable_ktls": false 00:03:37.250 } 00:03:37.250 } 00:03:37.250 ] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "vmd", 00:03:37.250 "config": [] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "accel", 00:03:37.250 "config": [ 00:03:37.250 { 00:03:37.250 "method": "accel_set_options", 00:03:37.250 "params": { 00:03:37.250 "small_cache_size": 128, 00:03:37.250 "large_cache_size": 16, 00:03:37.250 "task_count": 2048, 00:03:37.250 "sequence_count": 2048, 00:03:37.250 "buf_count": 2048 00:03:37.250 } 00:03:37.250 } 00:03:37.250 ] 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "subsystem": "bdev", 00:03:37.250 "config": [ 00:03:37.250 { 00:03:37.250 "method": "bdev_set_options", 00:03:37.250 "params": { 00:03:37.250 "bdev_io_pool_size": 65535, 00:03:37.250 "bdev_io_cache_size": 256, 00:03:37.250 "bdev_auto_examine": true, 00:03:37.250 "iobuf_small_cache_size": 128, 00:03:37.250 "iobuf_large_cache_size": 16 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "method": "bdev_raid_set_options", 00:03:37.250 "params": { 00:03:37.250 "process_window_size_kb": 1024, 00:03:37.250 "process_max_bandwidth_mb_sec": 0 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "method": "bdev_iscsi_set_options", 00:03:37.250 "params": { 00:03:37.250 "timeout_sec": 30 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.250 "method": "bdev_nvme_set_options", 00:03:37.250 "params": { 00:03:37.250 "action_on_timeout": "none", 00:03:37.250 "timeout_us": 0, 00:03:37.250 "timeout_admin_us": 0, 00:03:37.250 "keep_alive_timeout_ms": 10000, 00:03:37.250 "arbitration_burst": 0, 00:03:37.250 "low_priority_weight": 0, 00:03:37.250 "medium_priority_weight": 0, 00:03:37.250 "high_priority_weight": 0, 00:03:37.250 "nvme_adminq_poll_period_us": 10000, 00:03:37.250 "nvme_ioq_poll_period_us": 0, 00:03:37.250 "io_queue_requests": 0, 00:03:37.250 "delay_cmd_submit": true, 00:03:37.250 "transport_retry_count": 4, 00:03:37.250 "bdev_retry_count": 3, 00:03:37.250 "transport_ack_timeout": 0, 00:03:37.250 "ctrlr_loss_timeout_sec": 0, 00:03:37.250 "reconnect_delay_sec": 0, 00:03:37.250 "fast_io_fail_timeout_sec": 0, 00:03:37.250 "disable_auto_failback": false, 00:03:37.250 "generate_uuids": false, 00:03:37.250 "transport_tos": 0, 00:03:37.250 "nvme_error_stat": false, 00:03:37.250 "rdma_srq_size": 0, 00:03:37.250 "io_path_stat": false, 
00:03:37.250 "allow_accel_sequence": false, 00:03:37.250 "rdma_max_cq_size": 0, 00:03:37.250 "rdma_cm_event_timeout_ms": 0, 00:03:37.250 "dhchap_digests": [ 00:03:37.250 "sha256", 00:03:37.250 "sha384", 00:03:37.250 "sha512" 00:03:37.250 ], 00:03:37.250 "dhchap_dhgroups": [ 00:03:37.250 "null", 00:03:37.250 "ffdhe2048", 00:03:37.250 "ffdhe3072", 00:03:37.250 "ffdhe4096", 00:03:37.250 "ffdhe6144", 00:03:37.250 "ffdhe8192" 00:03:37.250 ] 00:03:37.250 } 00:03:37.250 }, 00:03:37.250 { 00:03:37.251 "method": "bdev_nvme_set_hotplug", 00:03:37.251 "params": { 00:03:37.251 "period_us": 100000, 00:03:37.251 "enable": false 00:03:37.251 } 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "method": "bdev_wait_for_examine" 00:03:37.251 } 00:03:37.251 ] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "scsi", 00:03:37.251 "config": null 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "scheduler", 00:03:37.251 "config": [ 00:03:37.251 { 00:03:37.251 "method": "framework_set_scheduler", 00:03:37.251 "params": { 00:03:37.251 "name": "static" 00:03:37.251 } 00:03:37.251 } 00:03:37.251 ] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "vhost_scsi", 00:03:37.251 "config": [] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "vhost_blk", 00:03:37.251 "config": [] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "ublk", 00:03:37.251 "config": [] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "nbd", 00:03:37.251 "config": [] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "nvmf", 00:03:37.251 "config": [ 00:03:37.251 { 00:03:37.251 "method": "nvmf_set_config", 00:03:37.251 "params": { 00:03:37.251 "discovery_filter": "match_any", 00:03:37.251 "admin_cmd_passthru": { 00:03:37.251 "identify_ctrlr": false 00:03:37.251 }, 00:03:37.251 "dhchap_digests": [ 00:03:37.251 "sha256", 00:03:37.251 "sha384", 00:03:37.251 "sha512" 00:03:37.251 ], 00:03:37.251 "dhchap_dhgroups": [ 00:03:37.251 "null", 00:03:37.251 "ffdhe2048", 00:03:37.251 "ffdhe3072", 00:03:37.251 "ffdhe4096", 00:03:37.251 "ffdhe6144", 00:03:37.251 "ffdhe8192" 00:03:37.251 ] 00:03:37.251 } 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "method": "nvmf_set_max_subsystems", 00:03:37.251 "params": { 00:03:37.251 "max_subsystems": 1024 00:03:37.251 } 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "method": "nvmf_set_crdt", 00:03:37.251 "params": { 00:03:37.251 "crdt1": 0, 00:03:37.251 "crdt2": 0, 00:03:37.251 "crdt3": 0 00:03:37.251 } 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "method": "nvmf_create_transport", 00:03:37.251 "params": { 00:03:37.251 "trtype": "TCP", 00:03:37.251 "max_queue_depth": 128, 00:03:37.251 "max_io_qpairs_per_ctrlr": 127, 00:03:37.251 "in_capsule_data_size": 4096, 00:03:37.251 "max_io_size": 131072, 00:03:37.251 "io_unit_size": 131072, 00:03:37.251 "max_aq_depth": 128, 00:03:37.251 "num_shared_buffers": 511, 00:03:37.251 "buf_cache_size": 4294967295, 00:03:37.251 "dif_insert_or_strip": false, 00:03:37.251 "zcopy": false, 00:03:37.251 "c2h_success": true, 00:03:37.251 "sock_priority": 0, 00:03:37.251 "abort_timeout_sec": 1, 00:03:37.251 "ack_timeout": 0, 00:03:37.251 "data_wr_pool_size": 0 00:03:37.251 } 00:03:37.251 } 00:03:37.251 ] 00:03:37.251 }, 00:03:37.251 { 00:03:37.251 "subsystem": "iscsi", 00:03:37.251 "config": [ 00:03:37.251 { 00:03:37.251 "method": "iscsi_set_options", 00:03:37.251 "params": { 00:03:37.251 "node_base": "iqn.2016-06.io.spdk", 00:03:37.251 "max_sessions": 128, 00:03:37.251 "max_connections_per_session": 2, 00:03:37.251 "max_queue_depth": 64, 00:03:37.251 
"default_time2wait": 2, 00:03:37.251 "default_time2retain": 20, 00:03:37.251 "first_burst_length": 8192, 00:03:37.251 "immediate_data": true, 00:03:37.251 "allow_duplicated_isid": false, 00:03:37.251 "error_recovery_level": 0, 00:03:37.251 "nop_timeout": 60, 00:03:37.251 "nop_in_interval": 30, 00:03:37.251 "disable_chap": false, 00:03:37.251 "require_chap": false, 00:03:37.251 "mutual_chap": false, 00:03:37.251 "chap_group": 0, 00:03:37.251 "max_large_datain_per_connection": 64, 00:03:37.251 "max_r2t_per_connection": 4, 00:03:37.251 "pdu_pool_size": 36864, 00:03:37.251 "immediate_data_pool_size": 16384, 00:03:37.251 "data_out_pool_size": 2048 00:03:37.251 } 00:03:37.251 } 00:03:37.251 ] 00:03:37.251 } 00:03:37.251 ] 00:03:37.251 } 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3593013 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3593013 ']' 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3593013 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3593013 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3593013' 00:03:37.251 killing process with pid 3593013 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3593013 00:03:37.251 16:16:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3593013 00:03:37.510 16:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3593279 00:03:37.510 16:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:37.510 16:16:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3593279 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3593279 ']' 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3593279 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3593279 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3593279' 00:03:42.785 killing process with pid 3593279 00:03:42.785 16:16:37 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3593279 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3593279 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:42.785 00:03:42.785 real 0m6.206s 00:03:42.785 user 0m5.894s 00:03:42.785 sys 0m0.566s 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.785 16:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.785 ************************************ 00:03:42.785 END TEST skip_rpc_with_json 00:03:42.785 ************************************ 00:03:43.044 16:16:37 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:43.044 16:16:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.044 16:16:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.044 16:16:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.044 ************************************ 00:03:43.044 START TEST skip_rpc_with_delay 00:03:43.044 ************************************ 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.044 [2024-12-06 16:16:37.588369] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
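Note: the *ERROR* line above is the expected outcome, not a failure. The skip_rpc_with_delay test asserts that spdk_tgt refuses '--wait-for-rpc' when the RPC server is disabled with '--no-rpc-server'. A reduced stand-alone sketch of that assertion, using the same binary and flags as this workspace (the harness wraps this in its NOT helper; the plain exit-status check below is a simplification):

  #!/usr/bin/env bash
  # Expect a non-zero exit: waiting for RPCs is meaningless when no RPC server will start.
  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: target accepted conflicting flags" >&2
    exit 1
  fi
  echo "OK: got the app.c:842 startup error as expected"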
00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:43.044 00:03:43.044 real 0m0.049s 00:03:43.044 user 0m0.028s 00:03:43.044 sys 0m0.020s 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.044 16:16:37 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:43.044 ************************************ 00:03:43.044 END TEST skip_rpc_with_delay 00:03:43.044 ************************************ 00:03:43.044 16:16:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:43.044 16:16:37 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:43.044 16:16:37 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:43.044 16:16:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.045 16:16:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.045 16:16:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.045 ************************************ 00:03:43.045 START TEST exit_on_failed_rpc_init 00:03:43.045 ************************************ 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3594378 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3594378 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3594378 ']' 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:43.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.045 16:16:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:43.045 [2024-12-06 16:16:37.711911] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:03:43.045 [2024-12-06 16:16:37.711950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594378 ] 00:03:43.045 [2024-12-06 16:16:37.769562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.304 [2024-12-06 16:16:37.808916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.304 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.563 [2024-12-06 16:16:38.067359] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:03:43.563 [2024-12-06 16:16:38.067408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594393 ] 00:03:43.563 [2024-12-06 16:16:38.123731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.563 [2024-12-06 16:16:38.160744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:43.563 [2024-12-06 16:16:38.160794] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:43.563 [2024-12-06 16:16:38.160803] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:43.563 [2024-12-06 16:16:38.160809] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3594378 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3594378 ']' 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3594378 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3594378 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3594378' 00:03:43.564 killing process with pid 3594378 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3594378 00:03:43.564 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3594378 00:03:43.822 00:03:43.822 real 0m0.877s 00:03:43.822 user 0m0.913s 00:03:43.822 sys 0m0.354s 00:03:43.822 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.822 16:16:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.822 ************************************ 00:03:43.822 END TEST exit_on_failed_rpc_init 00:03:43.822 ************************************ 00:03:44.082 16:16:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:44.082 00:03:44.082 real 0m12.933s 00:03:44.082 user 0m12.149s 00:03:44.082 sys 0m1.491s 00:03:44.082 16:16:38 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.082 16:16:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.082 ************************************ 00:03:44.082 END TEST skip_rpc 00:03:44.082 ************************************ 00:03:44.082 16:16:38 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.082 16:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.082 16:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.082 16:16:38 -- 
common/autotest_common.sh@10 -- # set +x 00:03:44.082 ************************************ 00:03:44.082 START TEST rpc_client 00:03:44.082 ************************************ 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.082 * Looking for test storage... 00:03:44.082 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.082 16:16:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.082 --rc genhtml_branch_coverage=1 00:03:44.082 --rc genhtml_function_coverage=1 00:03:44.082 --rc genhtml_legend=1 00:03:44.082 --rc geninfo_all_blocks=1 00:03:44.082 --rc geninfo_unexecuted_blocks=1 00:03:44.082 00:03:44.082 ' 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.082 --rc genhtml_branch_coverage=1 00:03:44.082 --rc genhtml_function_coverage=1 00:03:44.082 --rc genhtml_legend=1 00:03:44.082 --rc geninfo_all_blocks=1 00:03:44.082 --rc geninfo_unexecuted_blocks=1 00:03:44.082 00:03:44.082 ' 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.082 --rc genhtml_branch_coverage=1 00:03:44.082 --rc genhtml_function_coverage=1 00:03:44.082 --rc genhtml_legend=1 00:03:44.082 --rc geninfo_all_blocks=1 00:03:44.082 --rc geninfo_unexecuted_blocks=1 00:03:44.082 00:03:44.082 ' 00:03:44.082 16:16:38 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.082 --rc genhtml_branch_coverage=1 00:03:44.082 --rc genhtml_function_coverage=1 00:03:44.082 --rc genhtml_legend=1 00:03:44.082 --rc geninfo_all_blocks=1 00:03:44.082 --rc geninfo_unexecuted_blocks=1 00:03:44.082 00:03:44.082 ' 00:03:44.082 16:16:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:44.343 OK 00:03:44.343 16:16:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:44.343 00:03:44.343 real 0m0.172s 00:03:44.343 user 0m0.105s 00:03:44.343 sys 0m0.076s 00:03:44.343 16:16:38 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.343 16:16:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:44.343 ************************************ 00:03:44.343 END TEST rpc_client 00:03:44.343 ************************************ 00:03:44.343 16:16:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.343 
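Note on the lcov probing traced above: before each sub-test the harness compares the installed lcov version against 2 (scripts/common.sh splits both versions on '.', '-' and ':' and compares numerically, component by component) and, for lcov older than 2, exports the legacy '--rc lcov_branch_coverage/--rc lcov_function_coverage' option strings printed above. A reduced sketch of that comparison; ver_lt is an illustrative name, not the script's own helper:

  # Return 0 (true) when version $1 sorts strictly before version $2.
  ver_lt() {
    local IFS='.-:' i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }

  ver_lt 1.15 2 && echo "lcov < 2: export legacy lcov_* options"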
16:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.343 16:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.343 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:03:44.343 ************************************ 00:03:44.343 START TEST json_config 00:03:44.343 ************************************ 00:03:44.343 16:16:38 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.343 16:16:38 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.343 16:16:38 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.343 16:16:38 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.343 16:16:39 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.343 16:16:39 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.343 16:16:39 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.343 16:16:39 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.343 16:16:39 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.343 16:16:39 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:44.343 16:16:39 json_config -- scripts/common.sh@345 -- # : 1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.343 16:16:39 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.343 16:16:39 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@353 -- # local d=1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.343 16:16:39 json_config -- scripts/common.sh@355 -- # echo 1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.343 16:16:39 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@353 -- # local d=2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.343 16:16:39 json_config -- scripts/common.sh@355 -- # echo 2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.343 16:16:39 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.343 16:16:39 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.343 16:16:39 json_config -- scripts/common.sh@368 -- # return 0 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.343 --rc genhtml_branch_coverage=1 00:03:44.343 --rc genhtml_function_coverage=1 00:03:44.343 --rc genhtml_legend=1 00:03:44.343 --rc geninfo_all_blocks=1 00:03:44.343 --rc geninfo_unexecuted_blocks=1 00:03:44.343 00:03:44.343 ' 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.343 --rc genhtml_branch_coverage=1 00:03:44.343 --rc genhtml_function_coverage=1 00:03:44.343 --rc genhtml_legend=1 00:03:44.343 --rc geninfo_all_blocks=1 00:03:44.343 --rc geninfo_unexecuted_blocks=1 00:03:44.343 00:03:44.343 ' 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.343 --rc genhtml_branch_coverage=1 00:03:44.343 --rc genhtml_function_coverage=1 00:03:44.343 --rc genhtml_legend=1 00:03:44.343 --rc geninfo_all_blocks=1 00:03:44.343 --rc geninfo_unexecuted_blocks=1 00:03:44.343 00:03:44.343 ' 00:03:44.343 16:16:39 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.343 --rc genhtml_branch_coverage=1 00:03:44.343 --rc genhtml_function_coverage=1 00:03:44.343 --rc genhtml_legend=1 00:03:44.343 --rc geninfo_all_blocks=1 00:03:44.343 --rc geninfo_unexecuted_blocks=1 00:03:44.343 00:03:44.343 ' 00:03:44.343 16:16:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
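Note: the NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR pair just defined drives all RDMA addressing later in this run; discovered interfaces receive 192.168.100.8, .9, ... in discovery order, which is why mlx_0_0 and mlx_0_1 report those addresses further down. A sketch of how the scheme composes (echoed rather than applied; the real allocate_nic_ips helper, traced later, only assigns an address when the interface has none):

  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  count=$NVMF_IP_LEAST_ADDR
  for dev in mlx_0_0 mlx_0_1; do   # names as discovered below
    echo ip addr add "$NVMF_IP_PREFIX.$((count++))/24" dev "$dev"
  done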
00:03:44.343 16:16:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:44.343 16:16:39 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.343 16:16:39 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.343 16:16:39 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.343 16:16:39 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.343 16:16:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.343 16:16:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.343 16:16:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.343 16:16:39 json_config -- paths/export.sh@5 -- # export PATH 00:03:44.343 16:16:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@51 -- # : 0 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.343 
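Note: 'nvme gen-hostnqn' above fixes the initiator identity for the whole run; NVME_HOSTID is simply the UUID portion of NVME_HOSTNQN, and both ride along on every connect via the NVME_HOST array. A hedged sketch of the initiator-side command these variables compose, using the subsystem NQN, address, and port defined in common.sh (the RDMA path later extends NVME_CONNECT to 'nvme connect -i 15'; the exact connect line the suite builds may differ):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare UUID, as captured in the trace above
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"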
16:16:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.343 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.343 16:16:39 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.343 16:16:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:03:44.343 16:16:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:44.344 INFO: JSON configuration test init 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:44.344 16:16:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.344 16:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:44.344 16:16:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.344 16:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.344 16:16:39 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:44.344 16:16:39 json_config -- json_config/common.sh@9 -- # 
local app=target 00:03:44.603 16:16:39 json_config -- json_config/common.sh@10 -- # shift 00:03:44.603 16:16:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.603 16:16:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.603 16:16:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.603 16:16:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.603 16:16:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.603 16:16:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3594771 00:03:44.603 16:16:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.603 Waiting for target to run... 00:03:44.603 16:16:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3594771 /var/tmp/spdk_tgt.sock 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@835 -- # '[' -z 3594771 ']' 00:03:44.603 16:16:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.603 16:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.603 [2024-12-06 16:16:39.121533] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:03:44.603 [2024-12-06 16:16:39.121582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594771 ] 00:03:44.862 [2024-12-06 16:16:39.396594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.862 [2024-12-06 16:16:39.427192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:45.430 16:16:39 json_config -- json_config/common.sh@26 -- # echo '' 00:03:45.430 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:45.430 16:16:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:45.430 16:16:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:45.430 16:16:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:48.716 16:16:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@54 -- # sort 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:48.716 16:16:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.716 16:16:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.717 16:16:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:48.717 16:16:43 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:03:48.717 16:16:43 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:03:48.717 16:16:43 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:48.717 16:16:43 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:03:48.717 16:16:43 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:03:48.717 16:16:43 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:03:48.717 16:16:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:55.276 
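Note: the arrays initialized here are populated next with the NIC device IDs the suite supports (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX family including the 0x1015 parts found on this host); each matching PCI function is then resolved to its Linux netdev through sysfs, as the trace below shows. A reduced sketch of that lookup (the BDF is the first port found in this run):

  # List the netdevs the kernel bound to a given PCI function.
  pci=0000:18:00.0
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $path ]] || continue                         # no netdev bound to this function
    echo "Found net devices under $pci: ${path##*/}"   # prints: mlx_0_0
  done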
16:16:48 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@320 -- # e810=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@321 -- # x722=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@322 -- # mlx=() 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:03:55.276 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:03:55.276 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:03:55.276 16:16:48 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:03:55.276 Found net devices under 0000:18:00.0: mlx_0_0 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:55.276 16:16:48 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:03:55.277 Found net devices under 0000:18:00.1: mlx_0_1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@62 -- # uname 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:03:55.277 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:55.277 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:03:55.277 altname enp24s0f0np0 00:03:55.277 altname ens785f0np0 00:03:55.277 inet 192.168.100.8/24 scope global mlx_0_0 00:03:55.277 valid_lft forever preferred_lft forever 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:03:55.277 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:55.277 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:03:55.277 altname enp24s0f1np1 00:03:55.277 altname ens785f1np1 
00:03:55.277 inet 192.168.100.9/24 scope global mlx_0_1 00:03:55.277 valid_lft forever preferred_lft forever 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@450 -- # return 0 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:03:55.277 16:16:48 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:03:55.277 192.168.100.9' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:03:55.277 192.168.100.9' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@485 -- # head -n 1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:03:55.277 16:16:49 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:03:55.277 192.168.100.9' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@486 -- # head -n 1 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:03:55.277 16:16:49 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:03:55.277 16:16:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:03:55.277 16:16:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.277 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:55.277 MallocForNvmf0 00:03:55.277 16:16:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:55.277 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:55.277 MallocForNvmf1 00:03:55.277 16:16:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:03:55.277 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:03:55.277 [2024-12-06 16:16:49.586936] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:03:55.278 [2024-12-06 16:16:49.615852] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e1890/0x17b62c0) succeed. 00:03:55.278 [2024-12-06 16:16:49.627139] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e08d0/0x1835f80) succeed. 
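The target-side setup traced above reduces to a handful of rpc.py calls over the spdk_tgt UNIX socket. A minimal sketch, with paths, sizes and the socket copied from the log (the real json_config.sh wraps each call in tgt_rpc with xtrace and error handling):

    # Socket and script paths as logged in this workspace.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512 B blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024 B blocks
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0        # RDMA transport; -c 0 is raised to the
                                                           # 256 B in-capsule minimum per the WARNING above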
00:03:55.278 16:16:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.278 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:55.278 16:16:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.278 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:55.278 16:16:49 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.278 16:16:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.536 16:16:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:55.536 16:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:55.795 [2024-12-06 16:16:50.337575] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:03:55.795 16:16:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:55.795 16:16:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.795 16:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.795 16:16:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:55.795 16:16:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:55.795 16:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.795 16:16:50 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:55.795 16:16:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.795 16:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:56.054 MallocBdevForConfigChangeCheck 00:03:56.054 16:16:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:56.054 16:16:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.054 16:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.054 16:16:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:56.054 16:16:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.312 16:16:50 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:56.312 INFO: shutting down applications... 
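The subsystem half follows in the same style; the NQN, serial number, namespaces and listener address below are taken verbatim from the trace above. A condensed sketch:

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0             # namespaces backed by
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1             # the two malloc bdevs
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420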
00:03:56.312 16:16:50 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:56.312 16:16:50 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:56.312 16:16:50 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:56.312 16:16:50 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:00.500 Calling clear_iscsi_subsystem 00:04:00.500 Calling clear_nvmf_subsystem 00:04:00.500 Calling clear_nbd_subsystem 00:04:00.500 Calling clear_ublk_subsystem 00:04:00.500 Calling clear_vhost_blk_subsystem 00:04:00.500 Calling clear_vhost_scsi_subsystem 00:04:00.500 Calling clear_bdev_subsystem 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:00.500 16:16:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:00.500 16:16:55 json_config -- json_config/json_config.sh@352 -- # break 00:04:00.500 16:16:55 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:00.500 16:16:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:00.500 16:16:55 json_config -- json_config/common.sh@31 -- # local app=target 00:04:00.500 16:16:55 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:00.500 16:16:55 json_config -- json_config/common.sh@35 -- # [[ -n 3594771 ]] 00:04:00.500 16:16:55 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3594771 00:04:00.500 16:16:55 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:00.500 16:16:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.500 16:16:55 json_config -- json_config/common.sh@41 -- # kill -0 3594771 00:04:00.500 16:16:55 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:01.065 16:16:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:01.066 16:16:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:01.066 16:16:55 json_config -- json_config/common.sh@41 -- # kill -0 3594771 00:04:01.066 16:16:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:01.066 16:16:55 json_config -- json_config/common.sh@43 -- # break 00:04:01.066 16:16:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:01.066 16:16:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:01.066 SPDK target shutdown done 00:04:01.066 16:16:55 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:01.066 INFO: relaunching applications... 
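The shutdown traced above is a SIGINT followed by a bounded liveness poll. A minimal sketch of the loop json_config/common.sh runs (the pid variable stands in for app_pid["$app"]; 30 iterations of 0.5 s, as logged):

    kill -SIGINT "$pid"                      # ask spdk_tgt to exit cleanly
    for ((i = 0; i < 30; i++)); do           # wait up to ~15 s
        kill -0 "$pid" 2>/dev/null || break  # kill -0 only tests whether the pid is still alive
        sleep 0.5
    done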
00:04:01.066 16:16:55 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.066 16:16:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:01.066 16:16:55 json_config -- json_config/common.sh@10 -- # shift 00:04:01.066 16:16:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.066 16:16:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.066 16:16:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.066 16:16:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.066 16:16:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.066 16:16:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3599986 00:04:01.066 16:16:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.066 Waiting for target to run... 00:04:01.066 16:16:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.066 16:16:55 json_config -- json_config/common.sh@25 -- # waitforlisten 3599986 /var/tmp/spdk_tgt.sock 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@835 -- # '[' -z 3599986 ']' 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.066 16:16:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.066 [2024-12-06 16:16:55.734119] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:01.066 [2024-12-06 16:16:55.734171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599986 ] 00:04:01.631 [2024-12-06 16:16:56.172510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.631 [2024-12-06 16:16:56.229206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.916 [2024-12-06 16:16:59.269427] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb27330/0xb33e40) succeed. 00:04:04.916 [2024-12-06 16:16:59.278491] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb2a580/0xbb3e80) succeed. 
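Relaunching is a single spdk_tgt invocation that replays the configuration saved before shutdown; the command line is the one logged above:

    # -m 0x1: core 0 only; -s 1024: 1024 MB hugepage memory; -r: RPC socket path.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json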
00:04:04.916 [2024-12-06 16:16:59.326407] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:05.174 16:16:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.174 16:16:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:05.174 16:16:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:05.174 00:04:05.174 16:16:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:05.174 16:16:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:05.174 INFO: Checking if target configuration is the same... 00:04:05.174 16:16:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:05.174 16:16:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.174 16:16:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.432 + '[' 2 -ne 2 ']' 00:04:05.432 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.432 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:05.432 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:05.432 +++ basename /dev/fd/62 00:04:05.432 ++ mktemp /tmp/62.XXX 00:04:05.432 + tmp_file_1=/tmp/62.pvp 00:04:05.432 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.432 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.432 + tmp_file_2=/tmp/spdk_tgt_config.json.9KI 00:04:05.432 + ret=0 00:04:05.432 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.690 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:05.690 + diff -u /tmp/62.pvp /tmp/spdk_tgt_config.json.9KI 00:04:05.690 + echo 'INFO: JSON config files are the same' 00:04:05.690 INFO: JSON config files are the same 00:04:05.690 + rm /tmp/62.pvp /tmp/spdk_tgt_config.json.9KI 00:04:05.690 + exit 0 00:04:05.690 16:17:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:05.690 16:17:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:05.690 INFO: changing configuration and checking if this can be detected... 
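json_diff.sh sorts both configs with config_filter.py before diffing, so key order and formatting cannot cause a false mismatch. A condensed sketch of the comparison traced above (the real script receives the live config as /dev/fd/62 and picks temp names with mktemp; the names below are the ones it generated here):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/62.pvp
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/spdk_tgt_config.json.9KI
    diff -u /tmp/62.pvp /tmp/spdk_tgt_config.json.9KI \
        && echo 'INFO: JSON config files are the same'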
00:04:05.690 16:17:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.690 16:17:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:05.989 16:17:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.989 16:17:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:05.989 16:17:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.989 + '[' 2 -ne 2 ']' 00:04:05.989 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:05.989 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:05.989 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:05.989 +++ basename /dev/fd/62 00:04:05.989 ++ mktemp /tmp/62.XXX 00:04:05.989 + tmp_file_1=/tmp/62.EZ1 00:04:05.989 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.989 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:05.990 + tmp_file_2=/tmp/spdk_tgt_config.json.LIX 00:04:05.990 + ret=0 00:04:05.990 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.247 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:06.247 + diff -u /tmp/62.EZ1 /tmp/spdk_tgt_config.json.LIX 00:04:06.247 + ret=1 00:04:06.247 + echo '=== Start of file: /tmp/62.EZ1 ===' 00:04:06.247 + cat /tmp/62.EZ1 00:04:06.247 + echo '=== End of file: /tmp/62.EZ1 ===' 00:04:06.247 + echo '' 00:04:06.247 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LIX ===' 00:04:06.247 + cat /tmp/spdk_tgt_config.json.LIX 00:04:06.247 + echo '=== End of file: /tmp/spdk_tgt_config.json.LIX ===' 00:04:06.247 + echo '' 00:04:06.247 + rm /tmp/62.EZ1 /tmp/spdk_tgt_config.json.LIX 00:04:06.247 + exit 1 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:06.247 INFO: configuration change detected. 
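The negative test reuses the same comparison: deleting the marker bdev perturbs only the live configuration, so the sorted diff now exits nonzero. A sketch under the same assumptions as the previous one:

    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck   # live config now diverges from the file
    diff -u /tmp/62.EZ1 /tmp/spdk_tgt_config.json.LIX \
        || echo 'INFO: configuration change detected.'       # a nonzero diff is the expected outcome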
00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 3599986 ]] 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.247 16:17:00 json_config -- json_config/json_config.sh@330 -- # killprocess 3599986 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 3599986 ']' 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@958 -- # kill -0 3599986 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@959 -- # uname 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3599986 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3599986' 00:04:06.247 killing process with pid 3599986 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@973 -- # kill 3599986 00:04:06.247 16:17:00 json_config -- common/autotest_common.sh@978 -- # wait 3599986 00:04:10.431 16:17:04 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:10.431 16:17:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:10.431 16:17:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.431 16:17:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.431 16:17:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:10.431 16:17:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:10.431 INFO: Success 00:04:10.431 16:17:04 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@121 -- # sync 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:10.432 16:17:04 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:10.432 00:04:10.432 real 0m25.920s 00:04:10.432 user 0m27.632s 00:04:10.432 sys 0m6.897s 00:04:10.432 16:17:04 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.432 16:17:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.432 ************************************ 00:04:10.432 END TEST json_config 00:04:10.432 ************************************ 00:04:10.432 16:17:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.432 16:17:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.432 16:17:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.432 16:17:04 -- common/autotest_common.sh@10 -- # set +x 00:04:10.432 ************************************ 00:04:10.432 START TEST json_config_extra_key 00:04:10.432 ************************************ 00:04:10.432 16:17:04 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:10.432 16:17:04 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:10.432 16:17:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:10.432 16:17:04 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.432 16:17:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.432 --rc genhtml_branch_coverage=1 00:04:10.432 --rc genhtml_function_coverage=1 00:04:10.432 --rc genhtml_legend=1 00:04:10.432 --rc geninfo_all_blocks=1 00:04:10.432 --rc geninfo_unexecuted_blocks=1 00:04:10.432 00:04:10.432 ' 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.432 --rc genhtml_branch_coverage=1 00:04:10.432 --rc genhtml_function_coverage=1 00:04:10.432 --rc genhtml_legend=1 00:04:10.432 --rc geninfo_all_blocks=1 00:04:10.432 --rc geninfo_unexecuted_blocks=1 00:04:10.432 00:04:10.432 ' 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.432 --rc genhtml_branch_coverage=1 00:04:10.432 --rc genhtml_function_coverage=1 00:04:10.432 --rc genhtml_legend=1 00:04:10.432 --rc geninfo_all_blocks=1 00:04:10.432 --rc geninfo_unexecuted_blocks=1 00:04:10.432 00:04:10.432 ' 00:04:10.432 16:17:05 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.432 --rc genhtml_branch_coverage=1 00:04:10.432 --rc genhtml_function_coverage=1 00:04:10.432 --rc genhtml_legend=1 00:04:10.432 --rc geninfo_all_blocks=1 00:04:10.432 --rc geninfo_unexecuted_blocks=1 00:04:10.432 00:04:10.432 ' 00:04:10.432 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:10.432 
16:17:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:10.432 16:17:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:10.433 16:17:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:10.433 16:17:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:10.433 16:17:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:10.433 16:17:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:10.433 16:17:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.433 16:17:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.433 16:17:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.433 16:17:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:10.433 16:17:05 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:10.433 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:10.433 16:17:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:10.433 INFO: launching applications... 
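Two details in the trace above are worth unpacking. The '[: : integer expression expected' message is bash complaining about '[' '' -eq 1 ']' at nvmf/common.sh line 33: the left operand is an empty string, -eq requires integers, so the test errors out with a nonzero status and the script simply takes the false branch (execution continues at line 37, as logged). Separately, json_config/common.sh keys every per-app setting off a single name ('target' here) in associative arrays; a minimal sketch of that bookkeeping, values copied from the trace:

    declare -A app_pid=(['target']='')                           # set to $! once spdk_tgt is launched
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR           # abort the test on any failed command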
00:04:10.433 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3601906 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.433 Waiting for target to run... 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3601906 /var/tmp/spdk_tgt.sock 00:04:10.433 16:17:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3601906 ']' 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:10.433 16:17:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:10.433 [2024-12-06 16:17:05.101005] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:10.433 [2024-12-06 16:17:05.101049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601906 ] 00:04:11.000 [2024-12-06 16:17:05.509767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.000 [2024-12-06 16:17:05.557813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.259 16:17:05 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.259 16:17:05 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:11.259 00:04:11.259 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:11.259 INFO: shutting down applications... 
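The launch that this shutdown tears down followed the standard pattern: start spdk_tgt in the background with the extra_key config, record its pid, and poll until the RPC socket answers. A condensed sketch (waitforlisten is SPDK's common helper, invoked exactly as in the 'Waiting for target to run...' trace; the backgrounding and $! capture are the assumed plumbing around it):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json &
    app_pid['target']=$!
    waitforlisten "${app_pid['target']}" /var/tmp/spdk_tgt.sock  # returns once the socket accepts RPCs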
00:04:11.259 16:17:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3601906 ]] 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3601906 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3601906 00:04:11.259 16:17:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3601906 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.827 16:17:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.827 SPDK target shutdown done 00:04:11.827 16:17:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:11.827 Success 00:04:11.827 00:04:11.827 real 0m1.524s 00:04:11.827 user 0m1.115s 00:04:11.827 sys 0m0.542s 00:04:11.827 16:17:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.827 16:17:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:11.827 ************************************ 00:04:11.827 END TEST json_config_extra_key 00:04:11.827 ************************************ 00:04:11.827 16:17:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.827 16:17:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.827 16:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.827 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:04:11.827 ************************************ 00:04:11.827 START TEST alias_rpc 00:04:11.827 ************************************ 00:04:11.827 16:17:06 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.827 * Looking for test storage... 
00:04:11.827 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:11.827 16:17:06 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.087 16:17:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.087 --rc genhtml_branch_coverage=1 00:04:12.087 --rc genhtml_function_coverage=1 00:04:12.087 --rc genhtml_legend=1 00:04:12.087 --rc geninfo_all_blocks=1 00:04:12.087 --rc geninfo_unexecuted_blocks=1 00:04:12.087 00:04:12.087 ' 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.087 --rc genhtml_branch_coverage=1 00:04:12.087 --rc genhtml_function_coverage=1 00:04:12.087 --rc genhtml_legend=1 00:04:12.087 --rc geninfo_all_blocks=1 00:04:12.087 --rc geninfo_unexecuted_blocks=1 00:04:12.087 00:04:12.087 ' 00:04:12.087 16:17:06 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.087 --rc genhtml_branch_coverage=1 00:04:12.087 --rc genhtml_function_coverage=1 00:04:12.087 --rc genhtml_legend=1 00:04:12.087 --rc geninfo_all_blocks=1 00:04:12.087 --rc geninfo_unexecuted_blocks=1 00:04:12.087 00:04:12.087 ' 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.087 --rc genhtml_branch_coverage=1 00:04:12.087 --rc genhtml_function_coverage=1 00:04:12.087 --rc genhtml_legend=1 00:04:12.087 --rc geninfo_all_blocks=1 00:04:12.087 --rc geninfo_unexecuted_blocks=1 00:04:12.087 00:04:12.087 ' 00:04:12.087 16:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:12.087 16:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:12.087 16:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3602226 00:04:12.087 16:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3602226 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3602226 ']' 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.087 16:17:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.087 [2024-12-06 16:17:06.666365] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:12.087 [2024-12-06 16:17:06.666419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602226 ] 00:04:12.087 [2024-12-06 16:17:06.722238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.087 [2024-12-06 16:17:06.759207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.346 16:17:06 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:12.346 16:17:06 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:12.347 16:17:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:12.605 16:17:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3602226 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3602226 ']' 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3602226 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602226 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602226' 00:04:12.605 killing process with pid 3602226 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 3602226 00:04:12.605 16:17:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 3602226 00:04:12.864 00:04:12.864 real 0m1.058s 00:04:12.864 user 0m1.076s 00:04:12.864 sys 0m0.375s 00:04:12.864 16:17:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.864 16:17:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.864 ************************************ 00:04:12.864 END TEST alias_rpc 00:04:12.864 ************************************ 00:04:12.864 16:17:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:12.864 16:17:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:12.864 16:17:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.864 16:17:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.864 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:04:13.124 ************************************ 00:04:13.124 START TEST spdkcli_tcp 00:04:13.124 ************************************ 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:13.124 * Looking for test storage... 
00:04:13.124 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.124 16:17:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:13.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.124 --rc genhtml_branch_coverage=1 00:04:13.124 --rc genhtml_function_coverage=1 00:04:13.124 --rc genhtml_legend=1 00:04:13.124 --rc geninfo_all_blocks=1 00:04:13.124 --rc geninfo_unexecuted_blocks=1 00:04:13.124 00:04:13.124 ' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:13.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.124 --rc genhtml_branch_coverage=1 00:04:13.124 --rc genhtml_function_coverage=1 00:04:13.124 --rc genhtml_legend=1 00:04:13.124 --rc geninfo_all_blocks=1 00:04:13.124 --rc geninfo_unexecuted_blocks=1 
00:04:13.124 00:04:13.124 ' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:13.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.124 --rc genhtml_branch_coverage=1 00:04:13.124 --rc genhtml_function_coverage=1 00:04:13.124 --rc genhtml_legend=1 00:04:13.124 --rc geninfo_all_blocks=1 00:04:13.124 --rc geninfo_unexecuted_blocks=1 00:04:13.124 00:04:13.124 ' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:13.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.124 --rc genhtml_branch_coverage=1 00:04:13.124 --rc genhtml_function_coverage=1 00:04:13.124 --rc genhtml_legend=1 00:04:13.124 --rc geninfo_all_blocks=1 00:04:13.124 --rc geninfo_unexecuted_blocks=1 00:04:13.124 00:04:13.124 ' 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3602551 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3602551 00:04:13.124 16:17:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3602551 ']' 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:13.124 16:17:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.124 [2024-12-06 16:17:07.812704] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:13.124 [2024-12-06 16:17:07.812749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602551 ] 00:04:13.384 [2024-12-06 16:17:07.869522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.384 [2024-12-06 16:17:07.909930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.384 [2024-12-06 16:17:07.909934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.643 16:17:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.643 16:17:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:13.643 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3602559 00:04:13.643 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:13.643 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:13.643 [ 00:04:13.643 "bdev_malloc_delete", 00:04:13.643 "bdev_malloc_create", 00:04:13.643 "bdev_null_resize", 00:04:13.643 "bdev_null_delete", 00:04:13.643 "bdev_null_create", 00:04:13.643 "bdev_nvme_cuse_unregister", 00:04:13.643 "bdev_nvme_cuse_register", 00:04:13.643 "bdev_opal_new_user", 00:04:13.643 "bdev_opal_set_lock_state", 00:04:13.643 "bdev_opal_delete", 00:04:13.643 "bdev_opal_get_info", 00:04:13.643 "bdev_opal_create", 00:04:13.643 "bdev_nvme_opal_revert", 00:04:13.643 "bdev_nvme_opal_init", 00:04:13.643 "bdev_nvme_send_cmd", 00:04:13.643 "bdev_nvme_set_keys", 00:04:13.643 "bdev_nvme_get_path_iostat", 00:04:13.643 "bdev_nvme_get_mdns_discovery_info", 00:04:13.643 "bdev_nvme_stop_mdns_discovery", 00:04:13.643 "bdev_nvme_start_mdns_discovery", 00:04:13.643 "bdev_nvme_set_multipath_policy", 00:04:13.643 "bdev_nvme_set_preferred_path", 00:04:13.643 "bdev_nvme_get_io_paths", 00:04:13.643 "bdev_nvme_remove_error_injection", 00:04:13.643 "bdev_nvme_add_error_injection", 00:04:13.643 "bdev_nvme_get_discovery_info", 00:04:13.643 "bdev_nvme_stop_discovery", 00:04:13.643 "bdev_nvme_start_discovery", 00:04:13.643 "bdev_nvme_get_controller_health_info", 00:04:13.643 "bdev_nvme_disable_controller", 00:04:13.643 "bdev_nvme_enable_controller", 00:04:13.643 "bdev_nvme_reset_controller", 00:04:13.643 "bdev_nvme_get_transport_statistics", 00:04:13.643 "bdev_nvme_apply_firmware", 00:04:13.643 "bdev_nvme_detach_controller", 00:04:13.643 "bdev_nvme_get_controllers", 00:04:13.643 "bdev_nvme_attach_controller", 00:04:13.643 "bdev_nvme_set_hotplug", 00:04:13.643 "bdev_nvme_set_options", 00:04:13.643 "bdev_passthru_delete", 00:04:13.643 "bdev_passthru_create", 00:04:13.643 "bdev_lvol_set_parent_bdev", 00:04:13.643 "bdev_lvol_set_parent", 00:04:13.643 "bdev_lvol_check_shallow_copy", 00:04:13.643 "bdev_lvol_start_shallow_copy", 00:04:13.643 "bdev_lvol_grow_lvstore", 00:04:13.643 "bdev_lvol_get_lvols", 00:04:13.643 "bdev_lvol_get_lvstores", 00:04:13.643 "bdev_lvol_delete", 00:04:13.643 "bdev_lvol_set_read_only", 00:04:13.643 "bdev_lvol_resize", 00:04:13.643 "bdev_lvol_decouple_parent", 00:04:13.643 "bdev_lvol_inflate", 00:04:13.643 "bdev_lvol_rename", 00:04:13.644 "bdev_lvol_clone_bdev", 00:04:13.644 "bdev_lvol_clone", 00:04:13.644 "bdev_lvol_snapshot", 00:04:13.644 "bdev_lvol_create", 00:04:13.644 "bdev_lvol_delete_lvstore", 00:04:13.644 "bdev_lvol_rename_lvstore", 
00:04:13.644 "bdev_lvol_create_lvstore", 00:04:13.644 "bdev_raid_set_options", 00:04:13.644 "bdev_raid_remove_base_bdev", 00:04:13.644 "bdev_raid_add_base_bdev", 00:04:13.644 "bdev_raid_delete", 00:04:13.644 "bdev_raid_create", 00:04:13.644 "bdev_raid_get_bdevs", 00:04:13.644 "bdev_error_inject_error", 00:04:13.644 "bdev_error_delete", 00:04:13.644 "bdev_error_create", 00:04:13.644 "bdev_split_delete", 00:04:13.644 "bdev_split_create", 00:04:13.644 "bdev_delay_delete", 00:04:13.644 "bdev_delay_create", 00:04:13.644 "bdev_delay_update_latency", 00:04:13.644 "bdev_zone_block_delete", 00:04:13.644 "bdev_zone_block_create", 00:04:13.644 "blobfs_create", 00:04:13.644 "blobfs_detect", 00:04:13.644 "blobfs_set_cache_size", 00:04:13.644 "bdev_aio_delete", 00:04:13.644 "bdev_aio_rescan", 00:04:13.644 "bdev_aio_create", 00:04:13.644 "bdev_ftl_set_property", 00:04:13.644 "bdev_ftl_get_properties", 00:04:13.644 "bdev_ftl_get_stats", 00:04:13.644 "bdev_ftl_unmap", 00:04:13.644 "bdev_ftl_unload", 00:04:13.644 "bdev_ftl_delete", 00:04:13.644 "bdev_ftl_load", 00:04:13.644 "bdev_ftl_create", 00:04:13.644 "bdev_virtio_attach_controller", 00:04:13.644 "bdev_virtio_scsi_get_devices", 00:04:13.644 "bdev_virtio_detach_controller", 00:04:13.644 "bdev_virtio_blk_set_hotplug", 00:04:13.644 "bdev_iscsi_delete", 00:04:13.644 "bdev_iscsi_create", 00:04:13.644 "bdev_iscsi_set_options", 00:04:13.644 "accel_error_inject_error", 00:04:13.644 "ioat_scan_accel_module", 00:04:13.644 "dsa_scan_accel_module", 00:04:13.644 "iaa_scan_accel_module", 00:04:13.644 "keyring_file_remove_key", 00:04:13.644 "keyring_file_add_key", 00:04:13.644 "keyring_linux_set_options", 00:04:13.644 "fsdev_aio_delete", 00:04:13.644 "fsdev_aio_create", 00:04:13.644 "iscsi_get_histogram", 00:04:13.644 "iscsi_enable_histogram", 00:04:13.644 "iscsi_set_options", 00:04:13.644 "iscsi_get_auth_groups", 00:04:13.644 "iscsi_auth_group_remove_secret", 00:04:13.644 "iscsi_auth_group_add_secret", 00:04:13.644 "iscsi_delete_auth_group", 00:04:13.644 "iscsi_create_auth_group", 00:04:13.644 "iscsi_set_discovery_auth", 00:04:13.644 "iscsi_get_options", 00:04:13.644 "iscsi_target_node_request_logout", 00:04:13.644 "iscsi_target_node_set_redirect", 00:04:13.644 "iscsi_target_node_set_auth", 00:04:13.644 "iscsi_target_node_add_lun", 00:04:13.644 "iscsi_get_stats", 00:04:13.644 "iscsi_get_connections", 00:04:13.644 "iscsi_portal_group_set_auth", 00:04:13.644 "iscsi_start_portal_group", 00:04:13.644 "iscsi_delete_portal_group", 00:04:13.644 "iscsi_create_portal_group", 00:04:13.644 "iscsi_get_portal_groups", 00:04:13.644 "iscsi_delete_target_node", 00:04:13.644 "iscsi_target_node_remove_pg_ig_maps", 00:04:13.644 "iscsi_target_node_add_pg_ig_maps", 00:04:13.644 "iscsi_create_target_node", 00:04:13.644 "iscsi_get_target_nodes", 00:04:13.644 "iscsi_delete_initiator_group", 00:04:13.644 "iscsi_initiator_group_remove_initiators", 00:04:13.644 "iscsi_initiator_group_add_initiators", 00:04:13.644 "iscsi_create_initiator_group", 00:04:13.644 "iscsi_get_initiator_groups", 00:04:13.644 "nvmf_set_crdt", 00:04:13.644 "nvmf_set_config", 00:04:13.644 "nvmf_set_max_subsystems", 00:04:13.644 "nvmf_stop_mdns_prr", 00:04:13.644 "nvmf_publish_mdns_prr", 00:04:13.644 "nvmf_subsystem_get_listeners", 00:04:13.644 "nvmf_subsystem_get_qpairs", 00:04:13.644 "nvmf_subsystem_get_controllers", 00:04:13.644 "nvmf_get_stats", 00:04:13.644 "nvmf_get_transports", 00:04:13.644 "nvmf_create_transport", 00:04:13.644 "nvmf_get_targets", 00:04:13.644 "nvmf_delete_target", 00:04:13.644 "nvmf_create_target", 
00:04:13.644 "nvmf_subsystem_allow_any_host", 00:04:13.644 "nvmf_subsystem_set_keys", 00:04:13.644 "nvmf_subsystem_remove_host", 00:04:13.644 "nvmf_subsystem_add_host", 00:04:13.644 "nvmf_ns_remove_host", 00:04:13.644 "nvmf_ns_add_host", 00:04:13.644 "nvmf_subsystem_remove_ns", 00:04:13.644 "nvmf_subsystem_set_ns_ana_group", 00:04:13.644 "nvmf_subsystem_add_ns", 00:04:13.644 "nvmf_subsystem_listener_set_ana_state", 00:04:13.644 "nvmf_discovery_get_referrals", 00:04:13.644 "nvmf_discovery_remove_referral", 00:04:13.644 "nvmf_discovery_add_referral", 00:04:13.644 "nvmf_subsystem_remove_listener", 00:04:13.644 "nvmf_subsystem_add_listener", 00:04:13.644 "nvmf_delete_subsystem", 00:04:13.644 "nvmf_create_subsystem", 00:04:13.644 "nvmf_get_subsystems", 00:04:13.644 "env_dpdk_get_mem_stats", 00:04:13.644 "nbd_get_disks", 00:04:13.644 "nbd_stop_disk", 00:04:13.644 "nbd_start_disk", 00:04:13.644 "ublk_recover_disk", 00:04:13.644 "ublk_get_disks", 00:04:13.644 "ublk_stop_disk", 00:04:13.644 "ublk_start_disk", 00:04:13.644 "ublk_destroy_target", 00:04:13.644 "ublk_create_target", 00:04:13.644 "virtio_blk_create_transport", 00:04:13.644 "virtio_blk_get_transports", 00:04:13.644 "vhost_controller_set_coalescing", 00:04:13.644 "vhost_get_controllers", 00:04:13.644 "vhost_delete_controller", 00:04:13.644 "vhost_create_blk_controller", 00:04:13.644 "vhost_scsi_controller_remove_target", 00:04:13.644 "vhost_scsi_controller_add_target", 00:04:13.644 "vhost_start_scsi_controller", 00:04:13.644 "vhost_create_scsi_controller", 00:04:13.644 "thread_set_cpumask", 00:04:13.644 "scheduler_set_options", 00:04:13.644 "framework_get_governor", 00:04:13.644 "framework_get_scheduler", 00:04:13.644 "framework_set_scheduler", 00:04:13.644 "framework_get_reactors", 00:04:13.644 "thread_get_io_channels", 00:04:13.644 "thread_get_pollers", 00:04:13.644 "thread_get_stats", 00:04:13.644 "framework_monitor_context_switch", 00:04:13.644 "spdk_kill_instance", 00:04:13.644 "log_enable_timestamps", 00:04:13.644 "log_get_flags", 00:04:13.644 "log_clear_flag", 00:04:13.644 "log_set_flag", 00:04:13.644 "log_get_level", 00:04:13.644 "log_set_level", 00:04:13.644 "log_get_print_level", 00:04:13.644 "log_set_print_level", 00:04:13.644 "framework_enable_cpumask_locks", 00:04:13.644 "framework_disable_cpumask_locks", 00:04:13.644 "framework_wait_init", 00:04:13.644 "framework_start_init", 00:04:13.644 "scsi_get_devices", 00:04:13.644 "bdev_get_histogram", 00:04:13.644 "bdev_enable_histogram", 00:04:13.644 "bdev_set_qos_limit", 00:04:13.644 "bdev_set_qd_sampling_period", 00:04:13.644 "bdev_get_bdevs", 00:04:13.644 "bdev_reset_iostat", 00:04:13.644 "bdev_get_iostat", 00:04:13.644 "bdev_examine", 00:04:13.644 "bdev_wait_for_examine", 00:04:13.644 "bdev_set_options", 00:04:13.644 "accel_get_stats", 00:04:13.644 "accel_set_options", 00:04:13.644 "accel_set_driver", 00:04:13.644 "accel_crypto_key_destroy", 00:04:13.644 "accel_crypto_keys_get", 00:04:13.644 "accel_crypto_key_create", 00:04:13.644 "accel_assign_opc", 00:04:13.644 "accel_get_module_info", 00:04:13.644 "accel_get_opc_assignments", 00:04:13.644 "vmd_rescan", 00:04:13.644 "vmd_remove_device", 00:04:13.644 "vmd_enable", 00:04:13.644 "sock_get_default_impl", 00:04:13.644 "sock_set_default_impl", 00:04:13.644 "sock_impl_set_options", 00:04:13.644 "sock_impl_get_options", 00:04:13.644 "iobuf_get_stats", 00:04:13.644 "iobuf_set_options", 00:04:13.644 "keyring_get_keys", 00:04:13.644 "framework_get_pci_devices", 00:04:13.644 "framework_get_config", 00:04:13.644 "framework_get_subsystems", 
00:04:13.644 "fsdev_set_opts", 00:04:13.644 "fsdev_get_opts", 00:04:13.644 "trace_get_info", 00:04:13.644 "trace_get_tpoint_group_mask", 00:04:13.644 "trace_disable_tpoint_group", 00:04:13.644 "trace_enable_tpoint_group", 00:04:13.644 "trace_clear_tpoint_mask", 00:04:13.644 "trace_set_tpoint_mask", 00:04:13.644 "notify_get_notifications", 00:04:13.644 "notify_get_types", 00:04:13.644 "spdk_get_version", 00:04:13.644 "rpc_get_methods" 00:04:13.644 ] 00:04:13.644 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:13.644 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:13.644 16:17:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3602551 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3602551 ']' 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3602551 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.644 16:17:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602551 00:04:13.904 16:17:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.904 16:17:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.904 16:17:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602551' 00:04:13.904 killing process with pid 3602551 00:04:13.904 16:17:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3602551 00:04:13.904 16:17:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3602551 00:04:14.163 00:04:14.163 real 0m1.096s 00:04:14.163 user 0m1.842s 00:04:14.163 sys 0m0.439s 00:04:14.163 16:17:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.163 16:17:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:14.163 ************************************ 00:04:14.163 END TEST spdkcli_tcp 00:04:14.163 ************************************ 00:04:14.163 16:17:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.163 16:17:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.163 16:17:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.163 16:17:08 -- common/autotest_common.sh@10 -- # set +x 00:04:14.163 ************************************ 00:04:14.163 START TEST dpdk_mem_utility 00:04:14.163 ************************************ 00:04:14.163 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.163 * Looking for test storage... 
00:04:14.163 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:14.163 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.163 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.163 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.422 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:14.422 16:17:08 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.423 16:17:08 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.423 16:17:08 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.423 16:17:08 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.423 --rc genhtml_branch_coverage=1 00:04:14.423 --rc genhtml_function_coverage=1 00:04:14.423 --rc genhtml_legend=1 00:04:14.423 --rc geninfo_all_blocks=1 00:04:14.423 --rc geninfo_unexecuted_blocks=1 00:04:14.423 00:04:14.423 ' 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.423 --rc 
genhtml_branch_coverage=1 00:04:14.423 --rc genhtml_function_coverage=1 00:04:14.423 --rc genhtml_legend=1 00:04:14.423 --rc geninfo_all_blocks=1 00:04:14.423 --rc geninfo_unexecuted_blocks=1 00:04:14.423 00:04:14.423 ' 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.423 --rc genhtml_branch_coverage=1 00:04:14.423 --rc genhtml_function_coverage=1 00:04:14.423 --rc genhtml_legend=1 00:04:14.423 --rc geninfo_all_blocks=1 00:04:14.423 --rc geninfo_unexecuted_blocks=1 00:04:14.423 00:04:14.423 ' 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.423 --rc genhtml_branch_coverage=1 00:04:14.423 --rc genhtml_function_coverage=1 00:04:14.423 --rc genhtml_legend=1 00:04:14.423 --rc geninfo_all_blocks=1 00:04:14.423 --rc geninfo_unexecuted_blocks=1 00:04:14.423 00:04:14.423 ' 00:04:14.423 16:17:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:14.423 16:17:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3602887 00:04:14.423 16:17:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3602887 00:04:14.423 16:17:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3602887 ']' 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.423 16:17:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:14.423 [2024-12-06 16:17:08.963656] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:14.423 [2024-12-06 16:17:08.963700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602887 ] 00:04:14.423 [2024-12-06 16:17:09.020478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.423 [2024-12-06 16:17:09.057264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.682 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.682 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:14.682 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:14.682 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:14.682 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.682 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:14.682 { 00:04:14.682 "filename": "/tmp/spdk_mem_dump.txt" 00:04:14.682 } 00:04:14.682 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.682 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:14.682 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:14.682 1 heaps totaling size 818.000000 MiB 00:04:14.682 size: 818.000000 MiB heap id: 0 00:04:14.682 end heaps---------- 00:04:14.682 9 mempools totaling size 603.782043 MiB 00:04:14.682 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:14.682 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:14.682 size: 100.555481 MiB name: bdev_io_3602887 00:04:14.682 size: 50.003479 MiB name: msgpool_3602887 00:04:14.682 size: 36.509338 MiB name: fsdev_io_3602887 00:04:14.682 size: 21.763794 MiB name: PDU_Pool 00:04:14.682 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:14.682 size: 4.133484 MiB name: evtpool_3602887 00:04:14.682 size: 0.026123 MiB name: Session_Pool 00:04:14.682 end mempools------- 00:04:14.682 6 memzones totaling size 4.142822 MiB 00:04:14.682 size: 1.000366 MiB name: RG_ring_0_3602887 00:04:14.682 size: 1.000366 MiB name: RG_ring_1_3602887 00:04:14.682 size: 1.000366 MiB name: RG_ring_4_3602887 00:04:14.682 size: 1.000366 MiB name: RG_ring_5_3602887 00:04:14.682 size: 0.125366 MiB name: RG_ring_2_3602887 00:04:14.682 size: 0.015991 MiB name: RG_ring_3_3602887 00:04:14.682 end memzones------- 00:04:14.682 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:14.682 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:14.682 list of free elements. 
size: 10.852478 MiB 00:04:14.682 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:14.682 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:14.682 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:14.682 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:14.682 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:14.682 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:14.682 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:14.682 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:14.682 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:14.682 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:14.682 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:14.683 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:14.683 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:14.683 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:14.683 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:14.683 list of standard malloc elements. size: 199.218628 MiB 00:04:14.683 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:14.683 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:14.683 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:14.683 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:14.683 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:14.683 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:14.683 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:14.683 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:14.683 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:14.683 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:14.683 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:14.683 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:14.683 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:14.683 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:14.683 list of memzone associated elements. size: 607.928894 MiB 00:04:14.683 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:14.683 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:14.683 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:14.683 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:14.683 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:14.683 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3602887_0 00:04:14.683 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:14.683 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3602887_0 00:04:14.683 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:14.683 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3602887_0 00:04:14.683 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:14.683 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:14.683 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:14.683 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:14.683 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:14.683 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3602887_0 00:04:14.683 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:14.683 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3602887 00:04:14.683 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:14.683 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3602887 00:04:14.683 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:14.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:14.683 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:14.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:14.683 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:14.683 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:14.683 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:14.683 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:14.683 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:14.683 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3602887 00:04:14.683 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:14.683 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3602887 00:04:14.683 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:14.683 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3602887 00:04:14.683 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:14.683 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3602887 00:04:14.683 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:14.683 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3602887 00:04:14.683 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:14.683 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3602887 00:04:14.683 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:14.683 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:14.683 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:14.683 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:14.683 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:14.683 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:14.683 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:14.683 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3602887 00:04:14.683 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:14.683 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3602887 00:04:14.683 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:14.683 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:14.683 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:14.683 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:14.683 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:14.683 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3602887 00:04:14.683 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:14.683 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:14.683 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:14.683 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3602887 00:04:14.683 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:14.683 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3602887 00:04:14.683 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:14.683 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3602887 00:04:14.683 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:14.683 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:14.683 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:14.683 16:17:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3602887 00:04:14.683 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3602887 ']' 00:04:14.683 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3602887 00:04:14.683 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:14.683 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.683 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3602887 00:04:14.942 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.942 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.942 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3602887' 00:04:14.942 killing process with pid 3602887 00:04:14.942 16:17:09 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3602887 00:04:14.942 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3602887 00:04:15.201 00:04:15.201 real 0m0.965s 00:04:15.201 user 0m0.897s 00:04:15.201 sys 0m0.380s 00:04:15.201 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.201 16:17:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:15.201 ************************************ 00:04:15.201 END TEST dpdk_mem_utility 00:04:15.201 ************************************ 00:04:15.201 16:17:09 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:15.201 16:17:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.201 16:17:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.201 16:17:09 -- common/autotest_common.sh@10 -- # set +x 00:04:15.201 ************************************ 00:04:15.201 START TEST event 00:04:15.201 ************************************ 00:04:15.201 16:17:09 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:15.201 * Looking for test storage... 00:04:15.201 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:15.201 16:17:09 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.201 16:17:09 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.201 16:17:09 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.201 16:17:09 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.201 16:17:09 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.201 16:17:09 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.201 16:17:09 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.201 16:17:09 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.201 16:17:09 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.201 16:17:09 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.201 16:17:09 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.201 16:17:09 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.201 16:17:09 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.201 16:17:09 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.201 16:17:09 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.460 16:17:09 event -- scripts/common.sh@344 -- # case "$op" in 00:04:15.460 16:17:09 event -- scripts/common.sh@345 -- # : 1 00:04:15.460 16:17:09 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.460 16:17:09 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.460 16:17:09 event -- scripts/common.sh@365 -- # decimal 1 00:04:15.460 16:17:09 event -- scripts/common.sh@353 -- # local d=1 00:04:15.460 16:17:09 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.460 16:17:09 event -- scripts/common.sh@355 -- # echo 1 00:04:15.460 16:17:09 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.460 16:17:09 event -- scripts/common.sh@366 -- # decimal 2 00:04:15.460 16:17:09 event -- scripts/common.sh@353 -- # local d=2 00:04:15.460 16:17:09 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.460 16:17:09 event -- scripts/common.sh@355 -- # echo 2 00:04:15.460 16:17:09 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.460 16:17:09 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.460 16:17:09 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.460 16:17:09 event -- scripts/common.sh@368 -- # return 0 00:04:15.460 16:17:09 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.460 16:17:09 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.460 --rc genhtml_branch_coverage=1 00:04:15.460 --rc genhtml_function_coverage=1 00:04:15.460 --rc genhtml_legend=1 00:04:15.460 --rc geninfo_all_blocks=1 00:04:15.460 --rc geninfo_unexecuted_blocks=1 00:04:15.460 00:04:15.460 ' 00:04:15.460 16:17:09 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.460 --rc genhtml_branch_coverage=1 00:04:15.460 --rc genhtml_function_coverage=1 00:04:15.460 --rc genhtml_legend=1 00:04:15.460 --rc geninfo_all_blocks=1 00:04:15.460 --rc geninfo_unexecuted_blocks=1 00:04:15.460 00:04:15.460 ' 00:04:15.460 16:17:09 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.460 --rc genhtml_branch_coverage=1 00:04:15.460 --rc genhtml_function_coverage=1 00:04:15.460 --rc genhtml_legend=1 00:04:15.460 --rc geninfo_all_blocks=1 00:04:15.460 --rc geninfo_unexecuted_blocks=1 00:04:15.460 00:04:15.460 ' 00:04:15.461 16:17:09 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.461 --rc genhtml_branch_coverage=1 00:04:15.461 --rc genhtml_function_coverage=1 00:04:15.461 --rc genhtml_legend=1 00:04:15.461 --rc geninfo_all_blocks=1 00:04:15.461 --rc geninfo_unexecuted_blocks=1 00:04:15.461 00:04:15.461 ' 00:04:15.461 16:17:09 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:15.461 16:17:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:15.461 16:17:09 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:15.461 16:17:09 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:15.461 16:17:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.461 16:17:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.461 ************************************ 00:04:15.461 START TEST event_perf 00:04:15.461 ************************************ 00:04:15.461 16:17:09 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
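A note on the event_perf invocation above: -m 0xF is a hexadecimal core mask selecting lcores 0-3, which matches the four "Reactor started on core" and "lcore N:" lines that follow, and -t 1 bounds the measurement to one second. A minimal re-run sketch in the same workspace; the mask and duration values below are illustrative choices, not taken from this trace:

    # two reactors instead of four, five-second measurement window
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0x3 -t 5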
00:04:15.461 Running I/O for 1 seconds...[2024-12-06 16:17:09.993781] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:15.461 [2024-12-06 16:17:09.993848] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603107 ] 00:04:15.461 [2024-12-06 16:17:10.059622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.461 [2024-12-06 16:17:10.103528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.461 [2024-12-06 16:17:10.103612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.461 [2024-12-06 16:17:10.103697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.461 [2024-12-06 16:17:10.103701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.839 Running I/O for 1 seconds... 00:04:16.839 lcore 0: 221630 00:04:16.839 lcore 1: 221628 00:04:16.839 lcore 2: 221628 00:04:16.839 lcore 3: 221630 00:04:16.839 done. 00:04:16.839 00:04:16.839 real 0m1.169s 00:04:16.839 user 0m4.096s 00:04:16.839 sys 0m0.072s 00:04:16.839 16:17:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.839 16:17:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 ************************************ 00:04:16.839 END TEST event_perf 00:04:16.839 ************************************ 00:04:16.839 16:17:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:16.839 16:17:11 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:16.839 16:17:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.839 16:17:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.839 ************************************ 00:04:16.839 START TEST event_reactor 00:04:16.839 ************************************ 00:04:16.839 16:17:11 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:16.839 [2024-12-06 16:17:11.230408] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:16.839 [2024-12-06 16:17:11.230476] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603272 ] 00:04:16.839 [2024-12-06 16:17:11.293482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.839 [2024-12-06 16:17:11.331242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.776 test_start 00:04:17.776 oneshot 00:04:17.776 tick 100 00:04:17.776 tick 100 00:04:17.776 tick 250 00:04:17.776 tick 100 00:04:17.776 tick 100 00:04:17.776 tick 100 00:04:17.776 tick 250 00:04:17.776 tick 500 00:04:17.776 tick 100 00:04:17.776 tick 100 00:04:17.776 tick 250 00:04:17.776 tick 100 00:04:17.776 tick 100 00:04:17.776 test_end 00:04:17.776 00:04:17.776 real 0m1.155s 00:04:17.776 user 0m1.089s 00:04:17.776 sys 0m0.062s 00:04:17.776 16:17:12 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.776 16:17:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:17.776 ************************************ 00:04:17.776 END TEST event_reactor 00:04:17.776 ************************************ 00:04:17.776 16:17:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.776 16:17:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:17.776 16:17:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.776 16:17:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.776 ************************************ 00:04:17.776 START TEST event_reactor_perf 00:04:17.776 ************************************ 00:04:17.776 16:17:12 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:17.776 [2024-12-06 16:17:12.451652] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:17.776 [2024-12-06 16:17:12.451723] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603528 ] 00:04:18.035 [2024-12-06 16:17:12.517350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.035 [2024-12-06 16:17:12.553910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.974 test_start 00:04:18.974 test_end 00:04:18.974 Performance: 551979 events per second 00:04:18.974 00:04:18.974 real 0m1.160s 00:04:18.974 user 0m1.093s 00:04:18.974 sys 0m0.063s 00:04:18.974 16:17:13 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.974 16:17:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:18.974 ************************************ 00:04:18.974 END TEST event_reactor_perf 00:04:18.974 ************************************ 00:04:18.974 16:17:13 event -- event/event.sh@49 -- # uname -s 00:04:18.974 16:17:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:18.974 16:17:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:18.974 16:17:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.974 16:17:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.974 16:17:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.974 ************************************ 00:04:18.974 START TEST event_scheduler 00:04:18.974 ************************************ 00:04:18.974 16:17:13 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:19.234 * Looking for test storage... 
00:04:19.234 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:19.234 16:17:13 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.234 16:17:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.234 16:17:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.234 16:17:13 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.235 16:17:13 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.235 --rc genhtml_branch_coverage=1 00:04:19.235 --rc genhtml_function_coverage=1 00:04:19.235 --rc genhtml_legend=1 00:04:19.235 --rc geninfo_all_blocks=1 00:04:19.235 --rc geninfo_unexecuted_blocks=1 00:04:19.235 00:04:19.235 ' 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.235 --rc genhtml_branch_coverage=1 00:04:19.235 --rc genhtml_function_coverage=1 00:04:19.235 --rc genhtml_legend=1 00:04:19.235 --rc geninfo_all_blocks=1 00:04:19.235 --rc geninfo_unexecuted_blocks=1 00:04:19.235 00:04:19.235 ' 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.235 --rc genhtml_branch_coverage=1 00:04:19.235 --rc genhtml_function_coverage=1 00:04:19.235 --rc genhtml_legend=1 00:04:19.235 --rc geninfo_all_blocks=1 00:04:19.235 --rc geninfo_unexecuted_blocks=1 00:04:19.235 00:04:19.235 ' 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.235 --rc genhtml_branch_coverage=1 00:04:19.235 --rc genhtml_function_coverage=1 00:04:19.235 --rc genhtml_legend=1 00:04:19.235 --rc geninfo_all_blocks=1 00:04:19.235 --rc geninfo_unexecuted_blocks=1 00:04:19.235 00:04:19.235 ' 00:04:19.235 16:17:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:19.235 16:17:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3603846 00:04:19.235 16:17:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.235 16:17:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:19.235 16:17:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3603846 
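The scheduler app above is launched with --wait-for-rpc, so it pauses before subsystem initialization until an RPC releases it, and waitforlisten blocks until the app answers on its UNIX domain socket. A minimal sketch of that wait pattern, assuming the default socket /var/tmp/spdk.sock used throughout this log; this illustrates the idea only and is not the autotest helper itself:

    # poll until the RPC server responds; rpc_get_methods is the same call
    # exercised over TCP in the spdkcli_tcp test earlier in this log
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

In this trace the actual release happens a step later, once framework_set_scheduler has been issued (scheduler.sh@39 and @40 below).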
00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3603846 ']' 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.235 16:17:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.235 [2024-12-06 16:17:13.876889] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:19.235 [2024-12-06 16:17:13.876937] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603846 ] 00:04:19.235 [2024-12-06 16:17:13.932161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:19.495 [2024-12-06 16:17:13.975327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.495 [2024-12-06 16:17:13.975346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.495 [2024-12-06 16:17:13.975430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:19.495 [2024-12-06 16:17:13.975434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:19.495 16:17:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 [2024-12-06 16:17:14.044015] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:19.495 [2024-12-06 16:17:14.044032] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:19.495 [2024-12-06 16:17:14.044040] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:19.495 [2024-12-06 16:17:14.044045] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:19.495 [2024-12-06 16:17:14.044050] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 [2024-12-06 16:17:14.118079] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
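Condensing the RPC sequence driven here (method names and flags exactly as they appear in this trace; rpc_cmd is the autotest wrapper around scripts/rpc.py): the dynamic scheduler is selected before init, the dpdk governor fails on this core mask (the *ERROR* about SMT siblings) and the NOTICE that follows shows the test proceeding without it, then initialization is released and pinned threads are created through the scheduler plugin:

    rpc.py framework_set_scheduler dynamic    # must precede framework_start_init
    rpc.py framework_start_init               # releases the app started with --wait-for-rpc
    rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # -n thread name, -m core pin mask, -a activity percentage, a reading taken
    # from the calls that follow: active_pinned uses -a 100, idle_pinned -a 0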
00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 ************************************ 00:04:19.495 START TEST scheduler_create_thread 00:04:19.495 ************************************ 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 2 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 3 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 4 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 5 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 6 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 7 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.495 8 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.495 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.496 9 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.496 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.755 10 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.755 16:17:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:21.132 16:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:21.132 16:17:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:21.132 16:17:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:21.132 16:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.132 16:17:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.067 16:17:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.067 00:04:22.067 real 0m2.616s 00:04:22.067 user 0m0.023s 00:04:22.067 sys 0m0.005s 00:04:22.067 16:17:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.067 16:17:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:22.067 ************************************ 00:04:22.067 END TEST scheduler_create_thread 00:04:22.067 ************************************ 00:04:22.326 16:17:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:22.326 16:17:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3603846 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3603846 ']' 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3603846 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603846 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603846' 00:04:22.326 killing process with pid 3603846 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3603846 00:04:22.326 16:17:16 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3603846 00:04:22.584 [2024-12-06 16:17:17.247953] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
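[editor's note] The scheduler_create_thread test above (scheduler.sh@12-26) exercises the dynamic scheduler by creating one busy and one idle pinned thread per core, two unpinned threads, then retuning and deleting threads at runtime. The same RPC calls, condensed; the thread ids (11 and 12 in this run) are whatever the create call returns:

    # Condensed from scheduler.sh@12-26 in the trace above.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m $mask -a 0
    done
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active $thread_id 50    # id 11 here
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete $thread_id           # id 12 here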
00:04:22.842 00:04:22.842 real 0m3.749s 00:04:22.842 user 0m5.676s 00:04:22.842 sys 0m0.347s 00:04:22.842 16:17:17 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.842 16:17:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:22.842 ************************************ 00:04:22.842 END TEST event_scheduler 00:04:22.842 ************************************ 00:04:22.842 16:17:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:22.842 16:17:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:22.842 16:17:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.842 16:17:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.842 16:17:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.842 ************************************ 00:04:22.842 START TEST app_repeat 00:04:22.842 ************************************ 00:04:22.842 16:17:17 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3604678 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.842 16:17:17 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:22.843 16:17:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3604678' 00:04:22.843 Process app_repeat pid: 3604678 00:04:22.843 16:17:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.843 16:17:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:22.843 spdk_app_start Round 0 00:04:22.843 16:17:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3604678 /var/tmp/spdk-nbd.sock 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3604678 ']' 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.843 16:17:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.843 [2024-12-06 16:17:17.521143] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
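[editor's note] The app_repeat test starting above (event.sh@17-25) launches one app instance and then drives three rounds against it; each round re-waits for the RPC socket because the previous round kills the instance and the app restarts. A sketch of the driver loop with this run's arguments (the meaning of -t is not shown in the trace; it matches repeat_times=4 set at event.sh@15):

    # Condensed from event.sh@17-25 and @38 in the trace above.
    modprobe nbd
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                                          # 3604678 in this run
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        # per-round bdev creation, nbd I/O verification, then spdk_kill_instance
        # (see the sketches further below)
    done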
00:04:22.843 [2024-12-06 16:17:17.521192] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3604678 ] 00:04:23.100 [2024-12-06 16:17:17.581200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.100 [2024-12-06 16:17:17.622132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.100 [2024-12-06 16:17:17.622137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.100 16:17:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.100 16:17:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.100 16:17:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.357 Malloc0 00:04:23.358 16:17:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.358 Malloc1 00:04:23.615 16:17:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.615 /dev/nbd0 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.615 1+0 records in 00:04:23.615 1+0 records out 00:04:23.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230386 s, 17.8 MB/s 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.615 16:17:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.615 16:17:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.872 /dev/nbd1 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.872 1+0 records in 00:04:23.872 1+0 records out 00:04:23.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239963 s, 17.1 MB/s 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.872 16:17:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.872 16:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.130 { 00:04:24.130 "nbd_device": "/dev/nbd0", 00:04:24.130 "bdev_name": "Malloc0" 00:04:24.130 }, 00:04:24.130 { 00:04:24.130 "nbd_device": "/dev/nbd1", 00:04:24.130 "bdev_name": "Malloc1" 00:04:24.130 } 00:04:24.130 ]' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.130 { 00:04:24.130 "nbd_device": "/dev/nbd0", 00:04:24.130 "bdev_name": "Malloc0" 00:04:24.130 }, 00:04:24.130 { 00:04:24.130 "nbd_device": "/dev/nbd1", 00:04:24.130 "bdev_name": "Malloc1" 00:04:24.130 } 00:04:24.130 ]' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.130 /dev/nbd1' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.130 /dev/nbd1' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.130 256+0 records in 00:04:24.130 256+0 records out 00:04:24.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00986996 s, 106 MB/s 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.130 256+0 records in 00:04:24.130 256+0 records out 00:04:24.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127847 s, 82.0 MB/s 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.130 16:17:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.130 256+0 records in 00:04:24.130 256+0 records out 00:04:24.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140117 s, 74.8 MB/s 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.131 16:17:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.389 16:17:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.647 16:17:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.906 16:17:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.906 16:17:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.906 16:17:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.165 [2024-12-06 16:17:19.780473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.165 [2024-12-06 16:17:19.818023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.165 [2024-12-06 16:17:19.818027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.165 [2024-12-06 16:17:19.857309] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.165 [2024-12-06 16:17:19.857349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.554 16:17:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:28.554 16:17:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:28.554 spdk_app_start Round 1 00:04:28.554 16:17:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3604678 /var/tmp/spdk-nbd.sock 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3604678 ']' 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
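[editor's note] Each round above runs nbd_rpc_data_verify (event.sh@27-30, nbd_common.sh@70-105): two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device and compared back. Condensed from the trace, with $rpc standing in for the scripts/rpc.py invocation shown there:

    # Condensed from the trace above; $rpc abbreviates
    # "scripts/rpc.py -s /var/tmp/spdk-nbd.sock".
    $rpc bdev_malloc_create 64 4096                        # -> Malloc0 (64 MiB, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096                        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0                  # waitfornbd polls /proc/partitions,
    $rpc nbd_start_disk Malloc1 /dev/nbd1                  # then dd's one 4 KiB block to confirm
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write through the nbd device
        cmp -b -n 1M nbdrandtest $nbd                              # read back and verify
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1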
00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.554 16:17:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.554 16:17:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.554 Malloc0 00:04:28.554 16:17:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:28.554 Malloc1 00:04:28.554 16:17:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:28.554 16:17:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:28.555 16:17:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:28.555 16:17:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:28.555 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:28.555 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.555 16:17:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:28.859 /dev/nbd0 00:04:28.859 16:17:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:28.859 16:17:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:28.859 1+0 records in 00:04:28.859 1+0 records out 00:04:28.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196239 s, 20.9 MB/s 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:28.859 16:17:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:28.859 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:28.859 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:28.859 16:17:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:28.859 /dev/nbd1 00:04:29.118 16:17:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:29.118 16:17:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:29.118 16:17:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:29.119 1+0 records in 00:04:29.119 1+0 records out 00:04:29.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231602 s, 17.7 MB/s 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:29.119 16:17:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:29.119 { 00:04:29.119 
"nbd_device": "/dev/nbd0", 00:04:29.119 "bdev_name": "Malloc0" 00:04:29.119 }, 00:04:29.119 { 00:04:29.119 "nbd_device": "/dev/nbd1", 00:04:29.119 "bdev_name": "Malloc1" 00:04:29.119 } 00:04:29.119 ]' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:29.119 { 00:04:29.119 "nbd_device": "/dev/nbd0", 00:04:29.119 "bdev_name": "Malloc0" 00:04:29.119 }, 00:04:29.119 { 00:04:29.119 "nbd_device": "/dev/nbd1", 00:04:29.119 "bdev_name": "Malloc1" 00:04:29.119 } 00:04:29.119 ]' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:29.119 /dev/nbd1' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:29.119 /dev/nbd1' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:29.119 16:17:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:29.378 256+0 records in 00:04:29.378 256+0 records out 00:04:29.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106178 s, 98.8 MB/s 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:29.378 256+0 records in 00:04:29.378 256+0 records out 00:04:29.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134169 s, 78.2 MB/s 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:29.378 256+0 records in 00:04:29.378 256+0 records out 00:04:29.378 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141068 s, 74.3 MB/s 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.378 16:17:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:29.378 16:17:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:29.379 16:17:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:29.637 16:17:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:29.637 16:17:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:29.637 16:17:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:29.638 16:17:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:29.896 16:17:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:29.896 16:17:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:30.155 16:17:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:30.155 [2024-12-06 16:17:24.859139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.414 [2024-12-06 16:17:24.893726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.414 [2024-12-06 16:17:24.893729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.414 [2024-12-06 16:17:24.934258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:30.414 [2024-12-06 16:17:24.934299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:33.701 16:17:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:33.701 16:17:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:33.701 spdk_app_start Round 2 00:04:33.701 16:17:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3604678 /var/tmp/spdk-nbd.sock 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3604678 ']' 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:33.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
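[editor's note] After stopping the disks, each round confirms that nbd_get_disks reports an empty list before killing the instance (nbd_common.sh@61-66 and @104-109, event.sh@34-35 above). A sketch of that check, again with $rpc abbreviating the rpc.py call from the trace; the "|| true" matches the bare "true" visible at nbd_common.sh@65, since grep -c exits nonzero on zero matches:

    # Condensed from nbd_common.sh@61-66/@104-109 and event.sh@34-35 above.
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]                                     # no nbd devices may remain
    $rpc spdk_kill_instance SIGTERM                        # app exits and restarts for the next round
    sleep 3                                                # settle before re-polling the socket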
00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.701 16:17:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:33.701 16:17:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.701 Malloc0 00:04:33.701 16:17:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:33.701 Malloc1 00:04:33.701 16:17:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.701 16:17:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.961 /dev/nbd0 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:33.961 1+0 records in 00:04:33.961 1+0 records out 00:04:33.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187568 s, 21.8 MB/s 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.961 /dev/nbd1 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.961 16:17:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.961 16:17:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:34.220 1+0 records in 00:04:34.220 1+0 records out 00:04:34.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235946 s, 17.4 MB/s 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:34.220 16:17:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:34.220 { 00:04:34.220 
"nbd_device": "/dev/nbd0", 00:04:34.220 "bdev_name": "Malloc0" 00:04:34.220 }, 00:04:34.220 { 00:04:34.220 "nbd_device": "/dev/nbd1", 00:04:34.220 "bdev_name": "Malloc1" 00:04:34.220 } 00:04:34.220 ]' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.220 { 00:04:34.220 "nbd_device": "/dev/nbd0", 00:04:34.220 "bdev_name": "Malloc0" 00:04:34.220 }, 00:04:34.220 { 00:04:34.220 "nbd_device": "/dev/nbd1", 00:04:34.220 "bdev_name": "Malloc1" 00:04:34.220 } 00:04:34.220 ]' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.220 /dev/nbd1' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.220 /dev/nbd1' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.220 16:17:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.221 16:17:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.221 256+0 records in 00:04:34.221 256+0 records out 00:04:34.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396045 s, 265 MB/s 00:04:34.221 16:17:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.221 16:17:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.221 256+0 records in 00:04:34.221 256+0 records out 00:04:34.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134751 s, 77.8 MB/s 00:04:34.221 16:17:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.221 16:17:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.480 256+0 records in 00:04:34.480 256+0 records out 00:04:34.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136735 s, 76.7 MB/s 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.480 16:17:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.480 16:17:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.739 16:17:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.998 16:17:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.998 16:17:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.257 16:17:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.257 [2024-12-06 16:17:29.916958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.257 [2024-12-06 16:17:29.950878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.257 [2024-12-06 16:17:29.950881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.516 [2024-12-06 16:17:29.990904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.516 [2024-12-06 16:17:29.990939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.047 16:17:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3604678 /var/tmp/spdk-nbd.sock 00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3604678 ']' 00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
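The nbd_dd_data_verify trace above is the core of the NBD smoke test: one 1 MiB file of random data is pushed through every exported /dev/nbdX with O_DIRECT, then compared back byte-for-byte before the devices are detached. A standalone sketch of the same write/verify pattern (device list and temp path are assumed here; the harness itself uses test/event/nbdrandtest):

  nbd_list=(/dev/nbd0 /dev/nbd1)          # assumed: already attached via nbd_start_disk
  tmp_file=$(mktemp)
  # write phase: 256 x 4 KiB of random data, pushed through each device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # O_DIRECT: bypass the page cache
  done
  # verify phase: the first differing byte makes cmp (and the test) fail
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"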
00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.047 16:17:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.306 16:17:32 event.app_repeat -- event/event.sh@39 -- # killprocess 3604678 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3604678 ']' 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3604678 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3604678 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3604678' 00:04:38.306 killing process with pid 3604678 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3604678 00:04:38.306 16:17:32 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3604678 00:04:38.565 spdk_app_start is called in Round 0. 00:04:38.565 Shutdown signal received, stop current app iteration 00:04:38.565 Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 reinitialization... 00:04:38.565 spdk_app_start is called in Round 1. 00:04:38.565 Shutdown signal received, stop current app iteration 00:04:38.565 Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 reinitialization... 00:04:38.565 spdk_app_start is called in Round 2. 00:04:38.565 Shutdown signal received, stop current app iteration 00:04:38.565 Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 reinitialization... 00:04:38.565 spdk_app_start is called in Round 3. 
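The @954-@978 lines just above are autotest_common.sh's killprocess() winding down the app_repeat target; nearly every statement of the helper is visible in the xtrace, so it can be reconstructed almost verbatim (only the sudo branch body is absent, since reactor_0 is not sudo here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1               # @954: require a pid
      kill -0 "$pid"                          # @958: bail out if it is already gone
      if [ "$(uname)" = Linux ]; then         # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960
      fi
      if [ "$process_name" = sudo ]; then     # @964: launched via sudo? (not the case here)
          :                                   # the real helper re-targets the child pid
      fi
      echo "killing process with pid $pid"    # @972
      kill "$pid"                             # @973: SIGTERM by default
      wait "$pid"                             # @978: reap it so its RPC socket and locks are freed
  }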
00:04:38.565 Shutdown signal received, stop current app iteration 00:04:38.565 16:17:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:38.565 16:17:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:38.565 00:04:38.565 real 0m15.646s 00:04:38.565 user 0m34.008s 00:04:38.565 sys 0m2.414s 00:04:38.565 16:17:33 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.565 16:17:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.565 ************************************ 00:04:38.565 END TEST app_repeat 00:04:38.565 ************************************ 00:04:38.565 16:17:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:38.565 16:17:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:38.565 16:17:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.565 16:17:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.565 16:17:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.565 ************************************ 00:04:38.565 START TEST cpu_locks 00:04:38.565 ************************************ 00:04:38.565 16:17:33 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:38.566 * Looking for test storage... 00:04:38.566 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:38.566 16:17:33 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.566 16:17:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.566 16:17:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.824 16:17:33 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.824 16:17:33 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.825 16:17:33 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.825 16:17:33 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.825 --rc genhtml_branch_coverage=1 00:04:38.825 --rc genhtml_function_coverage=1 00:04:38.825 --rc genhtml_legend=1 00:04:38.825 --rc geninfo_all_blocks=1 00:04:38.825 --rc geninfo_unexecuted_blocks=1 00:04:38.825 00:04:38.825 ' 00:04:38.825 16:17:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:38.825 16:17:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:38.825 16:17:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:38.825 16:17:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.825 16:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 ************************************ 
00:04:38.825 START TEST default_locks 00:04:38.825 ************************************ 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3607814 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3607814 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3607814 ']' 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.825 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.825 [2024-12-06 16:17:33.455152] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:38.825 [2024-12-06 16:17:33.455188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607814 ] 00:04:38.825 [2024-12-06 16:17:33.510537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.825 [2024-12-06 16:17:33.549366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.083 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.083 16:17:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:39.083 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3607814 00:04:39.083 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3607814 00:04:39.083 16:17:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.648 lslocks: write error 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3607814 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3607814 ']' 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3607814 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3607814 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3607814' 00:04:39.648 killing process with pid 3607814 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3607814 00:04:39.648 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3607814 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3607814 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3607814 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3607814 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3607814 ']' 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
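locks_exist (cpu_locks.sh@22), traced above for pid 3607814, is the assertion at the heart of this whole file: a running spdk_tgt must hold its per-core lock file. The stray "lslocks: write error" is expected noise, since grep -q exits at the first match and lslocks then takes a broken pipe on the rest of its output. The helper amounts to:

  locks_exist() {
      local pid=$1
      # spdk_tgt -m 0x1 locks /var/tmp/spdk_cpu_lock_000; lslocks prints
      # one row per lock held by the pid, and the name makes it greppable
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }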
00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.906 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3607814) - No such process 00:04:39.906 ERROR: process (pid: 3607814) is no longer running 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.906 00:04:39.906 real 0m1.046s 00:04:39.906 user 0m0.995s 00:04:39.906 sys 0m0.475s 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.906 16:17:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.906 ************************************ 00:04:39.906 END TEST default_locks 00:04:39.906 ************************************ 00:04:39.906 16:17:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:39.906 16:17:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.906 16:17:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.906 16:17:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.906 ************************************ 00:04:39.906 START TEST default_locks_via_rpc 00:04:39.906 ************************************ 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3608102 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3608102 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3608102 ']' 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
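The es bookkeeping above (@652, @655, @663 and friends) is the harness's NOT wrapper in the default_locks teardown: after killprocess, waitforlisten on the dead pid has to fail, and that expected failure is what counts as a pass. A sketch, with the final arithmetic test exactly as traced:

  NOT() {
      # run a command that is EXPECTED to fail, and invert the verdict
      local es=0
      "$@" || es=$?
      # the harness also inspects (( es > 128 )) to single out deaths by
      # signal; that branch is not taken in the trace above
      ((!es == 0))    # true (exit 0) only when the wrapped command failed
  }
  # as used above: NOT waitforlisten 3607814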
00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.906 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.907 [2024-12-06 16:17:34.571184] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:39.907 [2024-12-06 16:17:34.571226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608102 ] 00:04:39.907 [2024-12-06 16:17:34.628171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.164 [2024-12-06 16:17:34.664366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3608102 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3608102 00:04:40.165 16:17:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3608102 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3608102 ']' 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3608102 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608102 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.731 
16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608102' 00:04:40.731 killing process with pid 3608102 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3608102 00:04:40.731 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3608102 00:04:40.989 00:04:40.989 real 0m1.168s 00:04:40.989 user 0m1.120s 00:04:40.989 sys 0m0.520s 00:04:40.989 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.989 16:17:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.989 ************************************ 00:04:40.989 END TEST default_locks_via_rpc 00:04:40.989 ************************************ 00:04:41.249 16:17:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:41.249 16:17:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.249 16:17:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.249 16:17:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.249 ************************************ 00:04:41.249 START TEST non_locking_app_on_locked_coremask 00:04:41.249 ************************************ 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3608390 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3608390 /var/tmp/spdk.sock 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3608390 ']' 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.249 16:17:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.249 [2024-12-06 16:17:35.798698] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
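default_locks_via_rpc, which just finished above, flips the same locks at runtime instead of at startup: framework_disable_cpumask_locks and framework_enable_cpumask_locks are the two SPDK RPCs behind the rpc_cmd lines in the trace, driven over the default /var/tmp/spdk.sock. Spelled out with rpc.py directly (rpc_cmd is assumed to be a thin wrapper around it):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # locks released: no_locks must see zero files
  $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # locks re-acquired
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock           # and locks_exist passes again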
00:04:41.249 [2024-12-06 16:17:35.798736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608390 ] 00:04:41.249 [2024-12-06 16:17:35.856579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.249 [2024-12-06 16:17:35.895473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3608393 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3608393 /var/tmp/spdk2.sock 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3608393 ']' 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.508 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:41.508 [2024-12-06 16:17:36.146634] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:41.508 [2024-12-06 16:17:36.146681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608393 ] 00:04:41.508 [2024-12-06 16:17:36.228428] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
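The "CPU core locks deactivated." notice above is the crux of non_locking_app_on_locked_coremask: a second spdk_tgt may share core 0 with the first only because it opts out of core locking. Reduced to the two launches the trace shows (run from the spdk checkout):

  # instance 1: claims core 0, creating and locking /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 &
  # instance 2: same core mask, but skips the lock entirely and answers
  # on its own RPC socket - both can now run side by side
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &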
00:04:41.508 [2024-12-06 16:17:36.228452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.767 [2024-12-06 16:17:36.302623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.334 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.334 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:42.334 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3608390 00:04:42.334 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3608390 00:04:42.334 16:17:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.903 lslocks: write error 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3608390 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3608390 ']' 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3608390 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608390 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608390' 00:04:42.903 killing process with pid 3608390 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3608390 00:04:42.903 16:17:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3608390 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3608393 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3608393 ']' 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3608393 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608393 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608393' 00:04:43.472 
killing process with pid 3608393 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3608393 00:04:43.472 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3608393 00:04:44.041 00:04:44.041 real 0m2.718s 00:04:44.041 user 0m2.858s 00:04:44.041 sys 0m0.871s 00:04:44.041 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.041 16:17:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.041 ************************************ 00:04:44.041 END TEST non_locking_app_on_locked_coremask 00:04:44.041 ************************************ 00:04:44.041 16:17:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:44.041 16:17:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.041 16:17:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.041 16:17:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.041 ************************************ 00:04:44.041 START TEST locking_app_on_unlocked_coremask 00:04:44.041 ************************************ 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3608915 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3608915 /var/tmp/spdk.sock 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3608915 ']' 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.041 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:44.041 [2024-12-06 16:17:38.579133] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:44.041 [2024-12-06 16:17:38.579171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608915 ] 00:04:44.041 [2024-12-06 16:17:38.635667] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:44.041 [2024-12-06 16:17:38.635706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.041 [2024-12-06 16:17:38.674775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3608962 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3608962 /var/tmp/spdk2.sock 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3608962 ']' 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:44.301 16:17:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:44.301 [2024-12-06 16:17:38.931792] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
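Every daemon start in this file funnels through waitforlisten (@835-@844 above): check the pid, default the RPC socket, print the "Waiting for process..." banner, then poll until the socket answers. The polling body never appears in this excerpt, so the rpc.py probe below is an assumption; the visible argument handling is kept as traced:

  waitforlisten() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # @835: a pid is mandatory
      local rpc_addr=${2:-/var/tmp/spdk.sock}      # @839: default RPC socket
      local max_retries=100                        # @840
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      local i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
          # assumed probe: any answered RPC proves the socket is live
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }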
00:04:44.301 [2024-12-06 16:17:38.931840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608962 ] 00:04:44.301 [2024-12-06 16:17:39.015119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.565 [2024-12-06 16:17:39.089288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.136 16:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.136 16:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:45.136 16:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3608962 00:04:45.136 16:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3608962 00:04:45.136 16:17:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.706 lslocks: write error 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3608915 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3608915 ']' 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3608915 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608915 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608915' 00:04:45.706 killing process with pid 3608915 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3608915 00:04:45.706 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3608915 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3608962 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3608962 ']' 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3608962 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608962 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.275 16:17:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608962' 00:04:46.275 killing process with pid 3608962 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3608962 00:04:46.275 16:17:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3608962 00:04:46.535 00:04:46.535 real 0m2.587s 00:04:46.535 user 0m2.678s 00:04:46.535 sys 0m0.853s 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.535 ************************************ 00:04:46.535 END TEST locking_app_on_unlocked_coremask 00:04:46.535 ************************************ 00:04:46.535 16:17:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:46.535 16:17:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.535 16:17:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.535 16:17:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.535 ************************************ 00:04:46.535 START TEST locking_app_on_locked_coremask 00:04:46.535 ************************************ 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3609366 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3609366 /var/tmp/spdk.sock 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3609366 ']' 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.535 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.535 [2024-12-06 16:17:41.237873] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:46.535 [2024-12-06 16:17:41.237915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609366 ] 00:04:46.794 [2024-12-06 16:17:41.297198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.794 [2024-12-06 16:17:41.334524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3609521 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3609521 /var/tmp/spdk2.sock 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3609521 /var/tmp/spdk2.sock 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3609521 /var/tmp/spdk2.sock 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3609521 ']' 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.053 16:17:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.053 [2024-12-06 16:17:41.593168] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:04:47.053 [2024-12-06 16:17:41.593208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609521 ] 00:04:47.053 [2024-12-06 16:17:41.673175] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3609366 has claimed it. 00:04:47.053 [2024-12-06 16:17:41.673214] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.622 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3609521) - No such process 00:04:47.622 ERROR: process (pid: 3609521) is no longer running 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3609366 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3609366 00:04:47.622 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.881 lslocks: write error 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3609366 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3609366 ']' 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3609366 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609366 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609366' 00:04:47.881 killing process with pid 3609366 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3609366 00:04:47.881 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3609366 00:04:48.140 00:04:48.140 real 0m1.532s 00:04:48.140 user 0m1.643s 00:04:48.140 sys 0m0.479s 00:04:48.140 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
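The pair of *ERROR* lines above is locking_app_on_locked_coremask passing: claim_cpu_cores() in app.c finds core 0's lock already held by pid 3609366, so the second daemon aborts before init. A shell rendering of that probe (the lock-file naming is confirmed by the trace's later spdk_cpu_lock_{000..002} globs; using flock(1) in place of the C code's file lock is this sketch's assumption):

  core=0
  lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
  exec 9> "$lockfile"                # open (and create) the per-core lock file on fd 9
  if ! flock -n 9; then              # nonblocking: refuse to wait for the owner
      echo "Cannot create lock on core $core, another process has claimed it" >&2
      exit 1                         # mirrors 'Unable to acquire lock ... - exiting.'
  fi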
00:04:48.140 16:17:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.140 ************************************ 00:04:48.140 END TEST locking_app_on_locked_coremask 00:04:48.140 ************************************ 00:04:48.140 16:17:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:48.140 16:17:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.140 16:17:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.140 16:17:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.140 ************************************ 00:04:48.140 START TEST locking_overlapped_coremask 00:04:48.140 ************************************ 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3609782 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3609782 /var/tmp/spdk.sock 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3609782 ']' 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.140 16:17:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:48.140 [2024-12-06 16:17:42.830139] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
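The starred START/END banners and the real/user/sys triple that bracket each test above come from run_test; the helper itself is outside this excerpt, so this is only a reconstruction consistent with its output (the real one also threads the test name into the xtrace prefixes seen on every line):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # emits the real/user/sys lines on completion
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }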
00:04:48.140 [2024-12-06 16:17:42.830177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609782 ] 00:04:48.398 [2024-12-06 16:17:42.887345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.398 [2024-12-06 16:17:42.928861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.398 [2024-12-06 16:17:42.928947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.398 [2024-12-06 16:17:42.928957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3609819 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3609819 /var/tmp/spdk2.sock 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3609819 /var/tmp/spdk2.sock 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3609819 /var/tmp/spdk2.sock 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3609819 ']' 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.658 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.658 [2024-12-06 16:17:43.178154] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
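The failure that follows is plain bit arithmetic on the two core masks; a worked check with the values from the log:

    # 0x7  = 0b00111 -> cores 0,1,2  (first target)
    # 0x1c = 0b11100 -> cores 2,3,4  (second target)
    echo $(( 0x7 & 0x1c ))   # 4 == 1<<2: core 2 is contested, so the
                             # second target's lock claim must fail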
00:04:48.658 [2024-12-06 16:17:43.178197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609819 ] 00:04:48.658 [2024-12-06 16:17:43.262409] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3609782 has claimed it. 00:04:48.658 [2024-12-06 16:17:43.262446] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:49.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3609819) - No such process 00:04:49.227 ERROR: process (pid: 3609819) is no longer running 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3609782 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3609782 ']' 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3609782 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609782 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609782' 00:04:49.227 killing process with pid 3609782 00:04:49.227 16:17:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3609782 00:04:49.227 16:17:43 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3609782 00:04:49.487 00:04:49.487 real 0m1.361s 00:04:49.487 user 0m3.759s 00:04:49.487 sys 0m0.370s 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 ************************************ 00:04:49.487 END TEST locking_overlapped_coremask 00:04:49.487 ************************************ 00:04:49.487 16:17:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:49.487 16:17:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.487 16:17:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.487 16:17:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 ************************************ 00:04:49.487 START TEST locking_overlapped_coremask_via_rpc 00:04:49.487 ************************************ 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3610084 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3610084 /var/tmp/spdk.sock 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3610084 ']' 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.487 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:49.748 [2024-12-06 16:17:44.251066] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:49.748 [2024-12-06 16:17:44.251104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610084 ] 00:04:49.748 [2024-12-06 16:17:44.308001] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
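Both targets in this variant are started with --disable-cpumask-locks, so overlapping masks are accepted at startup and locking is deferred to an explicit RPC; a sketch of the two launches (command lines as in the log, with the full binary path shortened to spdk_tgt for readability):

    spdk_tgt -m 0x7  --disable-cpumask-locks                          # first target
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks   # second target
    # neither claims /var/tmp/spdk_cpu_lock_* until
    # framework_enable_cpumask_locks is called over RPC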
00:04:49.748 [2024-12-06 16:17:44.308024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.748 [2024-12-06 16:17:44.349429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.748 [2024-12-06 16:17:44.349458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.748 [2024-12-06 16:17:44.349460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3610117 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3610117 /var/tmp/spdk2.sock 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3610117 ']' 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.008 16:17:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:50.008 [2024-12-06 16:17:44.604463] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:50.008 [2024-12-06 16:17:44.604509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610117 ] 00:04:50.008 [2024-12-06 16:17:44.686811] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.008 [2024-12-06 16:17:44.686834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.267 [2024-12-06 16:17:44.767605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.267 [2024-12-06 16:17:44.771460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:50.267 [2024-12-06 16:17:44.771461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 [2024-12-06 16:17:45.431444] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3610084 has claimed it. 
00:04:50.834 request: 00:04:50.834 { 00:04:50.834 "method": "framework_enable_cpumask_locks", 00:04:50.834 "req_id": 1 00:04:50.834 } 00:04:50.834 Got JSON-RPC error response 00:04:50.834 response: 00:04:50.834 { 00:04:50.834 "code": -32603, 00:04:50.834 "message": "Failed to claim CPU core: 2" 00:04:50.834 } 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3610084 /var/tmp/spdk.sock 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3610084 ']' 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.834 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.091 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3610117 /var/tmp/spdk2.sock 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3610117 ']' 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
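The -32603 response above is the expected result: the first target has already claimed core 2 through its own framework_enable_cpumask_locks call, so the same request on the second target's socket must fail. A sketch of the failing call, using the rpc.py client invoked elsewhere in this run:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"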
00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.092 00:04:51.092 real 0m1.616s 00:04:51.092 user 0m0.746s 00:04:51.092 sys 0m0.126s 00:04:51.092 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.349 16:17:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.349 ************************************ 00:04:51.349 END TEST locking_overlapped_coremask_via_rpc 00:04:51.349 ************************************ 00:04:51.350 16:17:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:51.350 16:17:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3610084 ]] 00:04:51.350 16:17:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3610084 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3610084 ']' 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3610084 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610084 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610084' 00:04:51.350 killing process with pid 3610084 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3610084 00:04:51.350 16:17:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3610084 00:04:51.608 16:17:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3610117 ]] 00:04:51.608 16:17:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3610117 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3610117 ']' 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3610117 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610117 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610117' 00:04:51.608 killing process with pid 3610117 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3610117 00:04:51.608 16:17:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3610117 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3610084 ]] 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3610084 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3610084 ']' 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3610084 00:04:51.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3610084) - No such process 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3610084 is not found' 00:04:51.867 Process with pid 3610084 is not found 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3610117 ]] 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3610117 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3610117 ']' 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3610117 00:04:51.867 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3610117) - No such process 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3610117 is not found' 00:04:51.867 Process with pid 3610117 is not found 00:04:51.867 16:17:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.867 00:04:51.867 real 0m13.373s 00:04:51.867 user 0m23.181s 00:04:51.867 sys 0m4.631s 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.867 16:17:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.867 ************************************ 00:04:51.867 END TEST cpu_locks 00:04:51.867 ************************************ 00:04:52.126 00:04:52.126 real 0m36.834s 00:04:52.126 user 1m9.396s 00:04:52.126 sys 0m7.957s 00:04:52.126 16:17:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.126 16:17:46 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 ************************************ 00:04:52.126 END TEST event 00:04:52.126 ************************************ 00:04:52.126 16:17:46 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:52.126 16:17:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.126 16:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.126 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 ************************************ 00:04:52.126 START TEST thread 00:04:52.126 ************************************ 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:52.126 * Looking for test storage... 00:04:52.126 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.126 16:17:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.126 16:17:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.126 16:17:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.126 16:17:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.126 16:17:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.126 16:17:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.126 16:17:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.126 16:17:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.126 16:17:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.126 16:17:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.126 16:17:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.126 16:17:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:52.126 16:17:46 thread -- scripts/common.sh@345 -- # : 1 00:04:52.126 16:17:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.126 16:17:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.126 16:17:46 thread -- scripts/common.sh@365 -- # decimal 1 00:04:52.126 16:17:46 thread -- scripts/common.sh@353 -- # local d=1 00:04:52.126 16:17:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.126 16:17:46 thread -- scripts/common.sh@355 -- # echo 1 00:04:52.126 16:17:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.126 16:17:46 thread -- scripts/common.sh@366 -- # decimal 2 00:04:52.126 16:17:46 thread -- scripts/common.sh@353 -- # local d=2 00:04:52.126 16:17:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.126 16:17:46 thread -- scripts/common.sh@355 -- # echo 2 00:04:52.126 16:17:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.126 16:17:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.126 16:17:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.126 16:17:46 thread -- scripts/common.sh@368 -- # return 0 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.126 --rc genhtml_branch_coverage=1 00:04:52.126 --rc genhtml_function_coverage=1 00:04:52.126 --rc genhtml_legend=1 00:04:52.126 --rc geninfo_all_blocks=1 00:04:52.126 --rc geninfo_unexecuted_blocks=1 00:04:52.126 00:04:52.126 ' 00:04:52.126 16:17:46 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.126 --rc genhtml_branch_coverage=1 00:04:52.126 --rc genhtml_function_coverage=1 00:04:52.126 --rc genhtml_legend=1 00:04:52.127 --rc geninfo_all_blocks=1 00:04:52.127 --rc geninfo_unexecuted_blocks=1 00:04:52.127 00:04:52.127 ' 00:04:52.127 16:17:46 thread -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.127 --rc genhtml_branch_coverage=1 00:04:52.127 --rc genhtml_function_coverage=1 00:04:52.127 --rc genhtml_legend=1 00:04:52.127 --rc geninfo_all_blocks=1 00:04:52.127 --rc geninfo_unexecuted_blocks=1 00:04:52.127 00:04:52.127 ' 00:04:52.127 16:17:46 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.127 --rc genhtml_branch_coverage=1 00:04:52.127 --rc genhtml_function_coverage=1 00:04:52.127 --rc genhtml_legend=1 00:04:52.127 --rc geninfo_all_blocks=1 00:04:52.127 --rc geninfo_unexecuted_blocks=1 00:04:52.127 00:04:52.127 ' 00:04:52.127 16:17:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.127 16:17:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:52.127 16:17:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.127 16:17:46 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.127 ************************************ 00:04:52.127 START TEST thread_poller_perf 00:04:52.127 ************************************ 00:04:52.127 16:17:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.385 [2024-12-06 16:17:46.854029] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:52.385 [2024-12-06 16:17:46.854124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610624 ] 00:04:52.385 [2024-12-06 16:17:46.917106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.385 [2024-12-06 16:17:46.954391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.385 Running 1000 pollers for 1 seconds with 1 microseconds period. 
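For readability, the poller_perf flags echoed above decode as follows (semantics inferred from the benchmark's own banner lines in this run):

    # poller_perf -b 1000 -l 1 -t 1
    #   -b 1000 : register 1000 pollers
    #   -l 1    : 1 microsecond period per poller (0 = run continuously)
    #   -t 1    : measure for 1 second, then print the summary below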
00:04:53.318 [2024-12-06T15:17:48.046Z] ====================================== 00:04:53.318 [2024-12-06T15:17:48.046Z] busy:2709217214 (cyc) 00:04:53.318 [2024-12-06T15:17:48.046Z] total_run_count: 458000 00:04:53.318 [2024-12-06T15:17:48.046Z] tsc_hz: 2700000000 (cyc) 00:04:53.318 [2024-12-06T15:17:48.046Z] ====================================== 00:04:53.318 [2024-12-06T15:17:48.046Z] poller_cost: 5915 (cyc), 2190 (nsec) 00:04:53.318 00:04:53.318 real 0m1.162s 00:04:53.318 user 0m1.096s 00:04:53.318 sys 0m0.063s 00:04:53.318 16:17:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.318 16:17:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.318 ************************************ 00:04:53.318 END TEST thread_poller_perf 00:04:53.318 ************************************ 00:04:53.318 16:17:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.318 16:17:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:53.318 16:17:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.318 16:17:48 thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.576 ************************************ 00:04:53.576 START TEST thread_poller_perf 00:04:53.576 ************************************ 00:04:53.576 16:17:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.576 [2024-12-06 16:17:48.080222] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:04:53.576 [2024-12-06 16:17:48.080295] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610807 ] 00:04:53.576 [2024-12-06 16:17:48.142504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.576 [2024-12-06 16:17:48.179348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.576 Running 1000 pollers for 1 seconds with 0 microseconds period. 
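The summary block reduces to two divisions; checking the first run's numbers by hand (all values from the log, with tsc_hz 2700000000 cyc/s = 2.7 cyc/ns):

    poller_cost_cyc  = busy / total_run_count = 2709217214 / 458000 ≈ 5915 cyc
    poller_cost_nsec = poller_cost_cyc / 2.7  = 5915 / 2.7          ≈ 2190 ns

The 0-microsecond run reported next works out the same way (2701587330 / 5605000 ≈ 481 cyc ≈ 178 ns).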
00:04:54.509 [2024-12-06T15:17:49.237Z] ====================================== 00:04:54.509 [2024-12-06T15:17:49.237Z] busy:2701587330 (cyc) 00:04:54.509 [2024-12-06T15:17:49.237Z] total_run_count: 5605000 00:04:54.509 [2024-12-06T15:17:49.237Z] tsc_hz: 2700000000 (cyc) 00:04:54.509 [2024-12-06T15:17:49.237Z] ====================================== 00:04:54.509 [2024-12-06T15:17:49.237Z] poller_cost: 481 (cyc), 178 (nsec) 00:04:54.509 00:04:54.509 real 0m1.155s 00:04:54.509 user 0m1.087s 00:04:54.509 sys 0m0.065s 00:04:54.509 16:17:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.509 16:17:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.509 ************************************ 00:04:54.509 END TEST thread_poller_perf 00:04:54.509 ************************************ 00:04:54.830 16:17:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:54.830 00:04:54.830 real 0m2.589s 00:04:54.830 user 0m2.316s 00:04:54.830 sys 0m0.282s 00:04:54.830 16:17:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.830 16:17:49 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.830 ************************************ 00:04:54.830 END TEST thread 00:04:54.830 ************************************ 00:04:54.830 16:17:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:54.830 16:17:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:04:54.830 16:17:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.830 16:17:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.830 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:04:54.830 ************************************ 00:04:54.830 START TEST app_cmdline 00:04:54.830 ************************************ 00:04:54.830 16:17:49 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:04:54.830 * Looking for test storage... 
00:04:54.830 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:04:54.830 16:17:49 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.830 16:17:49 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.830 16:17:49 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.830 16:17:49 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.830 16:17:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.831 16:17:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.831 --rc genhtml_branch_coverage=1 00:04:54.831 --rc genhtml_function_coverage=1 00:04:54.831 --rc genhtml_legend=1 00:04:54.831 --rc geninfo_all_blocks=1 00:04:54.831 --rc geninfo_unexecuted_blocks=1 00:04:54.831 00:04:54.831 ' 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.831 --rc genhtml_branch_coverage=1 00:04:54.831 --rc genhtml_function_coverage=1 00:04:54.831 --rc genhtml_legend=1 00:04:54.831 --rc geninfo_all_blocks=1 00:04:54.831 --rc geninfo_unexecuted_blocks=1 
00:04:54.831 00:04:54.831 ' 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.831 --rc genhtml_branch_coverage=1 00:04:54.831 --rc genhtml_function_coverage=1 00:04:54.831 --rc genhtml_legend=1 00:04:54.831 --rc geninfo_all_blocks=1 00:04:54.831 --rc geninfo_unexecuted_blocks=1 00:04:54.831 00:04:54.831 ' 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.831 --rc genhtml_branch_coverage=1 00:04:54.831 --rc genhtml_function_coverage=1 00:04:54.831 --rc genhtml_legend=1 00:04:54.831 --rc geninfo_all_blocks=1 00:04:54.831 --rc geninfo_unexecuted_blocks=1 00:04:54.831 00:04:54.831 ' 00:04:54.831 16:17:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:54.831 16:17:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3611170 00:04:54.831 16:17:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3611170 00:04:54.831 16:17:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3611170 ']' 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.831 16:17:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:55.090 [2024-12-06 16:17:49.502144] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
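This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over its RPC socket; a sketch of the two probes the test performs next (rpc.py path from the log):

    # on the allow-list: returns the version JSON shown below
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
    # not on the allow-list: expected to fail with -32601 "Method not found"
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats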
00:04:55.090 [2024-12-06 16:17:49.502211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611170 ] 00:04:55.090 [2024-12-06 16:17:49.560536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.090 [2024-12-06 16:17:49.599519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.090 16:17:49 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.090 16:17:49 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:55.090 16:17:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:55.349 { 00:04:55.349 "version": "SPDK v25.01-pre git sha1 f9a92382f", 00:04:55.349 "fields": { 00:04:55.349 "major": 25, 00:04:55.349 "minor": 1, 00:04:55.349 "patch": 0, 00:04:55.349 "suffix": "-pre", 00:04:55.349 "commit": "f9a92382f" 00:04:55.349 } 00:04:55.349 } 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:55.349 16:17:49 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.349 16:17:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:55.349 16:17:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:55.349 16:17:49 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.349 16:17:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:55.349 16:17:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:55.349 16:17:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:55.349 16:17:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.350 16:17:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:55.350 16:17:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:04:55.350 16:17:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:55.608 request: 00:04:55.608 { 00:04:55.608 "method": "env_dpdk_get_mem_stats", 00:04:55.608 "req_id": 1 00:04:55.608 } 00:04:55.608 Got JSON-RPC error response 00:04:55.608 response: 00:04:55.608 { 00:04:55.608 "code": -32601, 00:04:55.608 "message": "Method not found" 00:04:55.608 } 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.608 16:17:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3611170 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3611170 ']' 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3611170 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3611170 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3611170' 00:04:55.608 killing process with pid 3611170 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@973 -- # kill 3611170 00:04:55.608 16:17:50 app_cmdline -- common/autotest_common.sh@978 -- # wait 3611170 00:04:55.867 00:04:55.867 real 0m1.247s 00:04:55.867 user 0m1.427s 00:04:55.867 sys 0m0.415s 00:04:55.867 16:17:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.867 16:17:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:55.867 ************************************ 00:04:55.867 END TEST app_cmdline 00:04:55.867 ************************************ 00:04:55.867 16:17:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:04:55.867 16:17:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.867 16:17:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.867 16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:56.125 ************************************ 00:04:56.125 START TEST version 00:04:56.125 ************************************ 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:04:56.125 * Looking for test storage... 
00:04:56.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.125 16:17:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.125 16:17:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.125 16:17:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.125 16:17:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.125 16:17:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.125 16:17:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.125 16:17:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.125 16:17:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.125 16:17:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.125 16:17:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.125 16:17:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.125 16:17:50 version -- scripts/common.sh@344 -- # case "$op" in 00:04:56.125 16:17:50 version -- scripts/common.sh@345 -- # : 1 00:04:56.125 16:17:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.125 16:17:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.125 16:17:50 version -- scripts/common.sh@365 -- # decimal 1 00:04:56.125 16:17:50 version -- scripts/common.sh@353 -- # local d=1 00:04:56.125 16:17:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.125 16:17:50 version -- scripts/common.sh@355 -- # echo 1 00:04:56.125 16:17:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.125 16:17:50 version -- scripts/common.sh@366 -- # decimal 2 00:04:56.125 16:17:50 version -- scripts/common.sh@353 -- # local d=2 00:04:56.125 16:17:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.125 16:17:50 version -- scripts/common.sh@355 -- # echo 2 00:04:56.125 16:17:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.125 16:17:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.125 16:17:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.125 16:17:50 version -- scripts/common.sh@368 -- # return 0 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.125 --rc genhtml_branch_coverage=1 00:04:56.125 --rc genhtml_function_coverage=1 00:04:56.125 --rc genhtml_legend=1 00:04:56.125 --rc geninfo_all_blocks=1 00:04:56.125 --rc geninfo_unexecuted_blocks=1 00:04:56.125 00:04:56.125 ' 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.125 --rc genhtml_branch_coverage=1 00:04:56.125 --rc genhtml_function_coverage=1 00:04:56.125 --rc genhtml_legend=1 00:04:56.125 --rc geninfo_all_blocks=1 00:04:56.125 --rc geninfo_unexecuted_blocks=1 00:04:56.125 00:04:56.125 ' 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.125 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.125 --rc genhtml_branch_coverage=1 00:04:56.125 --rc genhtml_function_coverage=1 00:04:56.125 --rc genhtml_legend=1 00:04:56.125 --rc geninfo_all_blocks=1 00:04:56.125 --rc geninfo_unexecuted_blocks=1 00:04:56.125 00:04:56.125 ' 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.125 --rc genhtml_branch_coverage=1 00:04:56.125 --rc genhtml_function_coverage=1 00:04:56.125 --rc genhtml_legend=1 00:04:56.125 --rc geninfo_all_blocks=1 00:04:56.125 --rc geninfo_unexecuted_blocks=1 00:04:56.125 00:04:56.125 ' 00:04:56.125 16:17:50 version -- app/version.sh@17 -- # get_header_version major 00:04:56.125 16:17:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # cut -f2 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # tr -d '"' 00:04:56.125 16:17:50 version -- app/version.sh@17 -- # major=25 00:04:56.125 16:17:50 version -- app/version.sh@18 -- # get_header_version minor 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # tr -d '"' 00:04:56.125 16:17:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # cut -f2 00:04:56.125 16:17:50 version -- app/version.sh@18 -- # minor=1 00:04:56.125 16:17:50 version -- app/version.sh@19 -- # get_header_version patch 00:04:56.125 16:17:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # cut -f2 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # tr -d '"' 00:04:56.125 16:17:50 version -- app/version.sh@19 -- # patch=0 00:04:56.125 16:17:50 version -- app/version.sh@20 -- # get_header_version suffix 00:04:56.125 16:17:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # tr -d '"' 00:04:56.125 16:17:50 version -- app/version.sh@14 -- # cut -f2 00:04:56.125 16:17:50 version -- app/version.sh@20 -- # suffix=-pre 00:04:56.125 16:17:50 version -- app/version.sh@22 -- # version=25.1 00:04:56.125 16:17:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:56.125 16:17:50 version -- app/version.sh@28 -- # version=25.1rc0 00:04:56.125 16:17:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:04:56.125 16:17:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:56.125 16:17:50 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:56.125 16:17:50 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:56.125 00:04:56.125 real 0m0.226s 00:04:56.125 user 0m0.147s 00:04:56.125 sys 0m0.117s 00:04:56.125 16:17:50 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.125 16:17:50 version -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.125 ************************************ 00:04:56.125 END TEST version 00:04:56.125 ************************************ 00:04:56.384 16:17:50 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:56.384 16:17:50 -- spdk/autotest.sh@194 -- # uname -s 00:04:56.384 16:17:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:56.384 16:17:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:56.384 16:17:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:56.384 16:17:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:56.384 16:17:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.384 16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:56.384 16:17:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:56.384 16:17:50 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:04:56.384 16:17:50 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:04:56.384 16:17:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:56.384 16:17:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.384 16:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:56.384 ************************************ 00:04:56.384 START TEST nvmf_rdma 00:04:56.384 ************************************ 00:04:56.384 16:17:50 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:04:56.384 * Looking for test storage... 00:04:56.384 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:04:56.384 16:17:51 nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.384 16:17:51 nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.384 16:17:51 nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.384 16:17:51 nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.384 16:17:51 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.385 --rc genhtml_branch_coverage=1 00:04:56.385 --rc genhtml_function_coverage=1 00:04:56.385 --rc genhtml_legend=1 00:04:56.385 --rc geninfo_all_blocks=1 00:04:56.385 --rc geninfo_unexecuted_blocks=1 00:04:56.385 00:04:56.385 ' 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.385 --rc genhtml_branch_coverage=1 00:04:56.385 --rc genhtml_function_coverage=1 00:04:56.385 --rc genhtml_legend=1 00:04:56.385 --rc geninfo_all_blocks=1 00:04:56.385 --rc geninfo_unexecuted_blocks=1 00:04:56.385 00:04:56.385 ' 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.385 --rc genhtml_branch_coverage=1 00:04:56.385 --rc genhtml_function_coverage=1 00:04:56.385 --rc genhtml_legend=1 00:04:56.385 --rc geninfo_all_blocks=1 00:04:56.385 --rc geninfo_unexecuted_blocks=1 00:04:56.385 00:04:56.385 ' 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.385 --rc genhtml_branch_coverage=1 00:04:56.385 --rc genhtml_function_coverage=1 00:04:56.385 --rc genhtml_legend=1 00:04:56.385 --rc geninfo_all_blocks=1 00:04:56.385 --rc geninfo_unexecuted_blocks=1 00:04:56.385 00:04:56.385 ' 00:04:56.385 16:17:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:04:56.385 16:17:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:56.385 16:17:51 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.385 16:17:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:56.644 ************************************ 00:04:56.644 START TEST nvmf_target_core 00:04:56.644 ************************************ 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:04:56.644 * Looking for test storage... 00:04:56.644 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.644 --rc genhtml_branch_coverage=1 00:04:56.644 --rc genhtml_function_coverage=1 00:04:56.644 --rc genhtml_legend=1 00:04:56.644 --rc geninfo_all_blocks=1 00:04:56.644 --rc geninfo_unexecuted_blocks=1 00:04:56.644 00:04:56.644 ' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.644 --rc genhtml_branch_coverage=1 00:04:56.644 --rc genhtml_function_coverage=1 00:04:56.644 --rc genhtml_legend=1 00:04:56.644 --rc geninfo_all_blocks=1 00:04:56.644 --rc geninfo_unexecuted_blocks=1 00:04:56.644 00:04:56.644 ' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.644 --rc genhtml_branch_coverage=1 00:04:56.644 --rc genhtml_function_coverage=1 00:04:56.644 --rc genhtml_legend=1 00:04:56.644 --rc geninfo_all_blocks=1 00:04:56.644 --rc geninfo_unexecuted_blocks=1 00:04:56.644 00:04:56.644 ' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.644 --rc genhtml_branch_coverage=1 00:04:56.644 --rc genhtml_function_coverage=1 00:04:56.644 --rc genhtml_legend=1 00:04:56.644 --rc geninfo_all_blocks=1 00:04:56.644 --rc geninfo_unexecuted_blocks=1 00:04:56.644 00:04:56.644 ' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.644 16:17:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.645 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:56.645 
************************************ 00:04:56.645 START TEST nvmf_abort 00:04:56.645 ************************************ 00:04:56.645 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:04:56.905 * Looking for test storage... 00:04:56.905 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.905 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:56.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.905 --rc genhtml_branch_coverage=1 00:04:56.905 --rc genhtml_function_coverage=1 00:04:56.905 --rc genhtml_legend=1 00:04:56.905 --rc geninfo_all_blocks=1 00:04:56.905 --rc geninfo_unexecuted_blocks=1 00:04:56.905 00:04:56.905 ' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.906 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:56.906 16:17:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:02.180 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:02.180 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:02.180 Found net devices under 0000:18:00.0: mlx_0_0 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:02.180 Found net devices under 0000:18:00.1: mlx_0_1 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:02.180 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:02.181 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:02.181 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:02.181 altname enp24s0f0np0 00:05:02.181 altname ens785f0np0 00:05:02.181 inet 192.168.100.8/24 scope global mlx_0_0 00:05:02.181 valid_lft forever preferred_lft forever 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:02.181 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:02.181 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:02.181 altname enp24s0f1np1 00:05:02.181 altname ens785f1np1 00:05:02.181 inet 192.168.100.9/24 scope global mlx_0_1 00:05:02.181 valid_lft forever preferred_lft forever 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:02.181 16:17:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:02.181 192.168.100.9' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:02.181 192.168.100.9' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:02.181 192.168.100.9' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:02.181 16:17:56 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3614937 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3614937 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3614937 ']' 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.181 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:02.181 [2024-12-06 16:17:56.553272] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:05:02.181 [2024-12-06 16:17:56.553321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:02.181 [2024-12-06 16:17:56.610697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.181 [2024-12-06 16:17:56.651196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:02.182 [2024-12-06 16:17:56.651233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:02.182 [2024-12-06 16:17:56.651240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.182 [2024-12-06 16:17:56.651245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.182 [2024-12-06 16:17:56.651250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:02.182 [2024-12-06 16:17:56.652463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.182 [2024-12-06 16:17:56.652549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.182 [2024-12-06 16:17:56.652551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.182 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.182 [2024-12-06 16:17:56.811175] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b4f800/0x1b53cf0) succeed. 00:05:02.182 [2024-12-06 16:17:56.827295] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b50df0/0x1b95390) succeed. 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 Malloc0 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 Delay0 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 [2024-12-06 16:17:56.987486] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.441 16:17:56 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:02.441 [2024-12-06 16:17:57.088357] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:04.976 Initializing NVMe Controllers 00:05:04.976 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:04.976 controller IO queue size 128 less than required 00:05:04.976 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:04.976 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:04.976 Initialization complete. Launching workers. 
00:05:04.976 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 46696 00:05:04.976 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 46757, failed to submit 62 00:05:04.976 success 46697, unsuccessful 60, failed 0 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:04.976 rmmod nvme_rdma 00:05:04.976 rmmod nvme_fabrics 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3614937 ']' 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3614937 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3614937 ']' 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3614937 00:05:04.976 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3614937 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3614937' 00:05:04.977 killing process with pid 3614937 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3614937 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3614937 00:05:04.977 16:17:59 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:04.977 00:05:04.977 real 0m8.185s 00:05:04.977 user 0m11.970s 00:05:04.977 sys 0m4.120s 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 ************************************ 00:05:04.977 END TEST nvmf_abort 00:05:04.977 ************************************ 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 ************************************ 00:05:04.977 START TEST nvmf_ns_hotplug_stress 00:05:04.977 ************************************ 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:04.977 * Looking for test storage... 00:05:04.977 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.977 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
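The trace around this point is scripts/common.sh checking the installed lcov with lt 1.15 2: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, with missing fields counting as 0, until one side wins, as the decimal/ver1[v]/ver2[v] steps below show. A standalone sketch of the same idea, as a hypothetical helper rather than the repo's function:

version_lt() {
    # Split on the same separator set common.sh uses (IFS covers '.', '-', ':')
    local IFS=.-: i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first lower field decides
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x'   # true for this runner's lcov 1.15

Since the check succeeds, the run keeps the 1.x-style --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options exported in the LCOV_OPTS blocks that follow (newer lcov releases renamed those rc keys).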
00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.237 --rc genhtml_branch_coverage=1 00:05:05.237 --rc genhtml_function_coverage=1 00:05:05.237 --rc genhtml_legend=1 00:05:05.237 --rc geninfo_all_blocks=1 00:05:05.237 --rc geninfo_unexecuted_blocks=1 00:05:05.237 00:05:05.237 ' 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.237 --rc genhtml_branch_coverage=1 00:05:05.237 --rc genhtml_function_coverage=1 00:05:05.237 --rc genhtml_legend=1 00:05:05.237 --rc geninfo_all_blocks=1 00:05:05.237 --rc geninfo_unexecuted_blocks=1 00:05:05.237 00:05:05.237 ' 00:05:05.237 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.237 --rc genhtml_branch_coverage=1 00:05:05.238 --rc genhtml_function_coverage=1 00:05:05.238 --rc genhtml_legend=1 00:05:05.238 --rc geninfo_all_blocks=1 00:05:05.238 --rc geninfo_unexecuted_blocks=1 00:05:05.238 00:05:05.238 ' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:05.238 --rc genhtml_branch_coverage=1 00:05:05.238 --rc genhtml_function_coverage=1 00:05:05.238 --rc genhtml_legend=1 00:05:05.238 --rc geninfo_all_blocks=1 00:05:05.238 --rc geninfo_unexecuted_blocks=1 00:05:05.238 00:05:05.238 ' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.238 16:17:59 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.238 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:05.238 16:17:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:11.805 16:18:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:11.805 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.805 16:18:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:11.805 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:11.805 Found net devices under 0000:18:00.0: mlx_0_0 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
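This stretch is nvmf/common.sh enumerating RDMA-capable NICs: both PCI functions (0000:18:00.0 and 0000:18:00.1) match the Mellanox mlx table as 0x15b3:0x1015 (ConnectX-4 Lx), and each is then resolved to its kernel netdev through the device's net/ directory in sysfs. The same lookup in isolation, as a rough sketch of the sysfs walk:

# List Mellanox (vendor 0x15b3) PCI functions and the netdev behind each,
# e.g. "0000:18:00.0 (0x1015): mlx_0_0" on this runner
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x15b3 ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "${pci##*/} ($(<"$pci/device")): ${net##*/}"
    done
done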
00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:11.805 Found net devices under 0000:18:00.1: mlx_0_1 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:11.805 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:11.806 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.806 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:11.806 altname enp24s0f0np0 00:05:11.806 altname ens785f0np0 00:05:11.806 inet 192.168.100.8/24 scope global mlx_0_0 00:05:11.806 valid_lft forever preferred_lft forever 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:11.806 16:18:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:11.806 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.806 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:11.806 altname enp24s0f1np1 00:05:11.806 altname ens785f1np1 00:05:11.806 inet 192.168.100.9/24 scope global mlx_0_1 00:05:11.806 valid_lft forever preferred_lft forever 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:11.806 192.168.100.9' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:11.806 192.168.100.9' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:11.806 192.168.100.9' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:11.806 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3619303 00:05:11.807 16:18:05 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3619303 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3619303 ']' 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.807 [2024-12-06 16:18:05.492547] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:05:11.807 [2024-12-06 16:18:05.492587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:11.807 [2024-12-06 16:18:05.551264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.807 [2024-12-06 16:18:05.589389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:11.807 [2024-12-06 16:18:05.589424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:11.807 [2024-12-06 16:18:05.589431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.807 [2024-12-06 16:18:05.589436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.807 [2024-12-06 16:18:05.589441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
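nvmfappstart has launched nvmf_tgt as PID 3619303 with -m 0xE (cores 1-3, matching the three reactor threads that start just below), and waitforlisten blocks until the app answers on its RPC socket before the script issues any further rpc.py calls. A minimal sketch of that readiness gate, assuming the default socket path shown in the wait message above:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
pid=3619303
# Poll the RPC socket; rpc_get_methods only succeeds once the app is listening
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    # Bail out if the target died instead of coming up
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt $pid exited early" >&2; exit 1; }
    sleep 0.5
done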
00:05:11.807 [2024-12-06 16:18:05.590720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.807 [2024-12-06 16:18:05.590804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.807 [2024-12-06 16:18:05.590805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:11.807 16:18:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:11.807 [2024-12-06 16:18:05.909436] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a35800/0x1a39cf0) succeed. 00:05:11.807 [2024-12-06 16:18:05.917613] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a36df0/0x1a7b390) succeed. 00:05:11.807 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:11.807 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:11.807 [2024-12-06 16:18:06.387952] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:11.807 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:12.065 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:12.065 Malloc0 00:05:12.065 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:12.324 Delay0 00:05:12.324 16:18:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.583 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:05:12.583 NULL1 00:05:12.583 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:12.842 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:12.842 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3619788 00:05:12.842 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788 00:05:12.842 16:18:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.217 Read completed with error (sct=0, sc=11) 00:05:14.217 16:18:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.217 16:18:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:14.217 16:18:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:14.475 true 00:05:14.475 16:18:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788 00:05:14.475 16:18:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.414 16:18:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.415 16:18:10 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:05:15.415 16:18:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:05:15.719 true
00:05:15.719 16:18:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:15.719 16:18:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:16.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:16.321 16:18:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:16.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:16.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:16.579 16:18:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:05:16.579 16:18:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:05:16.837 true
00:05:16.837 16:18:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:16.837 16:18:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:17.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.799 16:18:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:17.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:17.799 16:18:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:05:17.799 16:18:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:05:18.058 true
00:05:18.058 16:18:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:18.058 16:18:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:18.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.995 16:18:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:18.995 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:18.995 16:18:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:05:18.995 16:18:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:05:19.254 true
00:05:19.254 16:18:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:19.254 16:18:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:20.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.191 16:18:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:20.191 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:20.191 16:18:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:05:20.191 16:18:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:05:20.450 true
00:05:20.450 16:18:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:20.450 16:18:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:21.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.384 16:18:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:21.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:21.384 16:18:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:05:21.384 16:18:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:05:21.643 true
00:05:21.643 16:18:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:21.643 16:18:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:22.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:22.581 16:18:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:22.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:22.581 16:18:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:05:22.581 16:18:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:05:22.840 true
00:05:22.840 16:18:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:22.840 16:18:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:23.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:23.777 16:18:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:23.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
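The block above is the single-namespace stress phase of ns_hotplug_stress.sh: while an I/O workload against Delay0 is still running as PID 3619788, the script hot-removes namespace 1, hot-adds it back, and grows the NULL1 bdev by one unit per pass. The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines appear to be the initiator's reads failing while the namespace is detached, which is exactly what the test exercises. A minimal bash sketch of the loop the @44-@50 markers trace, reconstructed from this log, so the loop condition and variable names are assumptions rather than the script verbatim:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    perf_pid=3619788   # PID of the I/O generator started earlier (taken from this log)
    null_size=1001
    while kill -0 "$perf_pid" 2> /dev/null; do                            # @44: loop while the workload lives
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back
        null_size=$((null_size + 1))                                      # @49
        $rpc_py bdev_null_resize NULL1 $null_size                         # @50: resize NULL1 to the new size
    done

Once kill -0 fails (the "No such process" line further down), the loop exits and the script moves on to cleanup and the multi-namespace phase.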
00:05:23.777 16:18:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:05:23.777 16:18:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:05:24.036 true
00:05:24.036 16:18:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:24.036 16:18:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:24.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.973 16:18:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:24.973 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:24.973 16:18:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:05:24.973 16:18:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:05:25.232 true
00:05:25.232 16:18:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:25.232 16:18:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:26.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.167 16:18:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:26.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:26.168 16:18:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:05:26.168 16:18:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:05:26.426 true
00:05:26.426 16:18:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:26.426 16:18:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:27.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.359 16:18:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:27.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:27.617 16:18:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:05:27.617 16:18:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:05:27.617 true
00:05:27.617 16:18:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:27.617 16:18:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:28.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:28.597 16:18:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:28.597 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:28.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:28.598 16:18:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:05:28.598 16:18:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:05:28.855 true
00:05:28.855 16:18:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:28.855 16:18:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.789 16:18:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:29.789 16:18:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:29.789 16:18:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:30.047 true
00:05:30.047 16:18:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:30.047 16:18:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.978 16:18:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.978 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:30.978 16:18:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:05:30.978 16:18:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:05:31.235 true
00:05:31.235 16:18:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:31.235 16:18:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:32.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.168 16:18:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:32.168 16:18:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:05:32.168 16:18:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:05:32.426 true
00:05:32.426 16:18:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:32.426 16:18:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.363 16:18:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:33.363 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:33.622 16:18:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:05:33.622 16:18:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:05:33.622 true
00:05:33.622 16:18:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:33.622 16:18:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:34.559 16:18:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:34.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:34.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:34.818 16:18:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:05:34.818 16:18:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:05:34.818 true
00:05:34.818 16:18:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:34.818 16:18:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:35.754 16:18:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:35.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.013 16:18:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:05:36.013 16:18:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:05:36.013 true
00:05:36.273 16:18:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:36.273 16:18:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.841 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:36.841 16:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:37.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:37.100 16:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:05:37.100 16:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:05:37.360 true
00:05:37.360 16:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:37.360 16:18:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:38.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.298 16:18:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:38.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:38.298 16:18:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:05:38.298 16:18:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:05:38.556 true
00:05:38.556 16:18:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:38.556 16:18:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:39.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.495 16:18:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:39.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:39.754 16:18:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:05:39.754 16:18:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:05:39.754 true
00:05:39.754 16:18:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:39.754 16:18:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.691 16:18:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:40.691 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:40.949 16:18:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:05:40.949 16:18:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:05:40.949 true
00:05:40.949 16:18:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:40.949 16:18:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:41.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:41.884 16:18:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:42.143 16:18:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:05:42.143 16:18:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:05:42.143 true
00:05:42.143 16:18:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:42.143 16:18:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:43.079 16:18:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:05:43.338 16:18:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:05:43.338 16:18:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:05:43.338 true
00:05:43.338 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:43.338 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.596 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.854 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:05:43.854 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:43.854 true
00:05:43.854 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:43.854 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.112 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.371 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:44.371 16:18:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:44.371 true
00:05:44.630 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:44.630 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.630 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:44.888 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:44.888 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:45.146 true
00:05:45.146 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:45.146 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.146 Initializing NVMe Controllers
00:05:45.146 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:05:45.146 Controller IO queue size 128, less than required.
00:05:45.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:45.146 Controller IO queue size 128, less than required.
00:05:45.146 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:45.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:45.146 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:45.146 Initialization complete. Launching workers.
00:05:45.146 ========================================================
00:05:45.146                                                                                      Latency(us)
00:05:45.146 Device Information                                                                 :     IOPS   MiB/s   Average        min        max
00:05:45.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  6193.87    3.02  18001.81     881.38 1126814.49
00:05:45.146 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35562.40   17.36   3599.28    2105.29  268824.20
00:05:45.146 ========================================================
00:05:45.146 Total                                                                            : 41756.27   20.39   5735.66     881.38 1126814.49
00:05:45.146
00:05:45.146 16:18:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:45.404 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:45.404 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:45.663 true
00:05:45.663 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3619788
00:05:45.663 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3619788) - No such process
00:05:45.663 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3619788
00:05:45.663 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:45.663 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:45.922 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:45.922 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:45.922 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:45.922 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:45.922 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:46.180 null0
00:05:46.180 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.180 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.180 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:46.437 null1
00:05:46.437 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.437 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.437 16:18:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:46.437 null2
00:05:46.437 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.437 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.437 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:05:46.694 null3
00:05:46.694 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.694 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.694 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:05:46.952 null4
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:05:46.952 null5
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:46.952 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:05:47.209 null6
00:05:47.209 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:47.209 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:47.209 16:18:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:05:47.468 null7
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.468 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
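From @53 onward the script has collected the finished workload and switched to the multi-namespace phase: eight null bdevs (null0 through null7, created at @60 with size 100 and block size 4096) each back one namespace, and eight add_remove workers run in parallel before the script waits on their PIDs at @66. A sketch of that fan-out, again reconstructed from the markers in this log rather than quoted from the script:

    nthreads=8                                        # @58
    pids=()
    for (( i = 0; i < nthreads; i++ )); do            # @59
        $rpc_py bdev_null_create null$i 100 4096      # @60: name, size, block size
    done
    for (( i = 0; i < nthreads; i++ )); do            # @62
        add_remove $((i + 1)) null$i &                # @63: NSID i+1 backed by null$i
        pids+=($!)                                    # @64
    done
    wait "${pids[@]}"                                 # @66

Because the eight workers are background subshells, their @14/@16/@17 start-up lines interleave in the log, which is why the entries around here arrive out of order.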
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
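The @14-@18 markers inside each worker trace the add_remove helper itself: ten cycles of adding the namespace with an explicit NSID and removing it again. A minimal reconstruction from those markers (the option order at @17/@18 matches the logged commands; the surrounding syntax is an assumption):

    add_remove() {
        local nsid=$1 bdev=$2                                                        # @14
        for (( i = 0; i < 10; i++ )); do                                             # @16
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid        # @18
        done
    }

With eight of these running at once, the @17 adds and @18 removes for NSIDs 1 through 8 below land in whatever order the subshells happen to be scheduled.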
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3626437 3626438 3626440 3626442 3626444 3626446 3626448 3626450
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.469 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:47.728 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:47.987 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.246 16:18:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:48.504 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.505 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.781 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.041 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.301 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.561 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.821 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
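
The trace above is ns_hotplug_stress.sh lines 16-18 iterating: each pass of the loop attaches namespaces 1-8 (backed by bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 in a varying order, then detaches all eight. A minimal bash sketch reconstructed from the xtrace alone -- the shuffle and the rpc_py shorthand are assumptions, and the real script may differ:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 10; ++i )); do      # matches the (( ++i )) / (( i < 10 )) lines traced at @16
        for n in $(shuf -e {1..8}); do    # assumed: the varying order per pass suggests a shuffle
            # nsid n is backed by bdev null$((n - 1)), per the add_ns calls traced at @17
            $rpc_py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do
            # detach all eight again before the next pass, per the remove_ns calls traced at @18
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done
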
00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.080 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.081 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.340 16:18:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.340 16:18:44 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
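
A single hotplug cycle from this loop can also be replayed by hand against a live target when debugging; the add/remove commands below just repeat what the trace shows, and nvmf_get_subsystems (assuming the standard SPDK RPC of that name) is one way to confirm the namespace actually appeared:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # attach bdev null2 to cnode1 as namespace 3
    $rpc nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    # list subsystems to verify nsid 3 is present
    $rpc nvmf_get_subsystems
    # detach it again
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
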
00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.597 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:50.856 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:51.115 16:18:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:51.115 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma 
']' 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:51.375 rmmod nvme_rdma 00:05:51.375 rmmod nvme_fabrics 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3619303 ']' 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3619303 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3619303 ']' 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3619303 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.375 16:18:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3619303 00:05:51.375 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:51.375 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:51.375 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3619303' 00:05:51.375 killing process with pid 3619303 00:05:51.375 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3619303 00:05:51.375 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3619303 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:51.635 00:05:51.635 real 0m46.669s 00:05:51.635 user 3m16.952s 00:05:51.635 sys 0m11.326s 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.635 ************************************ 00:05:51.635 END TEST nvmf_ns_hotplug_stress 00:05:51.635 ************************************ 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:05:51.635 ************************************ 00:05:51.635 START TEST nvmf_delete_subsystem 00:05:51.635 ************************************ 00:05:51.635 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:05:51.895 * Looking for test storage... 00:05:51.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.895 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.896 --rc genhtml_branch_coverage=1 00:05:51.896 --rc genhtml_function_coverage=1 00:05:51.896 --rc genhtml_legend=1 00:05:51.896 --rc geninfo_all_blocks=1 00:05:51.896 --rc geninfo_unexecuted_blocks=1 00:05:51.896 00:05:51.896 ' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.896 --rc genhtml_branch_coverage=1 00:05:51.896 --rc genhtml_function_coverage=1 00:05:51.896 --rc genhtml_legend=1 00:05:51.896 --rc geninfo_all_blocks=1 00:05:51.896 --rc geninfo_unexecuted_blocks=1 00:05:51.896 00:05:51.896 ' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.896 --rc genhtml_branch_coverage=1 00:05:51.896 --rc genhtml_function_coverage=1 00:05:51.896 --rc genhtml_legend=1 00:05:51.896 --rc geninfo_all_blocks=1 00:05:51.896 --rc geninfo_unexecuted_blocks=1 00:05:51.896 00:05:51.896 ' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.896 --rc genhtml_branch_coverage=1 00:05:51.896 --rc genhtml_function_coverage=1 00:05:51.896 --rc genhtml_legend=1 00:05:51.896 --rc geninfo_all_blocks=1 00:05:51.896 --rc geninfo_unexecuted_blocks=1 00:05:51.896 00:05:51.896 ' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@2-@4 and the echo at @6 each re-prepend the golangci/protoc/go tool directories onto a PATH that already contains them, so the traced values repeat those three entries many times over; the repeated segments are elided and the value is shown deduplicated] 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.896 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:51.896 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.897 16:18:46 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:57.172 16:18:51 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:57.172 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:57.172 
16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:57.172 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:57.172 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:57.173 Found net devices under 0000:18:00.0: mlx_0_0 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:57.173 Found net devices under 0000:18:00.1: mlx_0_1 00:05:57.173 16:18:51 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:57.173 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:57.434 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:57.434 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:57.434 altname enp24s0f0np0 00:05:57.434 altname ens785f0np0 00:05:57.434 inet 192.168.100.8/24 scope global mlx_0_0 00:05:57.434 valid_lft forever preferred_lft forever 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:57.434 16:18:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:57.434 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:57.434 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:57.434 altname enp24s0f1np1 00:05:57.434 altname 
ens785f1np1 00:05:57.434 inet 192.168.100.9/24 scope global mlx_0_1 00:05:57.434 valid_lft forever preferred_lft forever 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:57.434 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:57.435 192.168.100.9' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:57.435 192.168.100.9' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:57.435 192.168.100.9' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3630640 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3630640 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 3630640 ']' 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.435 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.435 [2024-12-06 16:18:52.157160] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:05:57.435 [2024-12-06 16:18:52.157206] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.695 [2024-12-06 16:18:52.216335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.695 [2024-12-06 16:18:52.254895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.695 [2024-12-06 16:18:52.254928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.695 [2024-12-06 16:18:52.254936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.695 [2024-12-06 16:18:52.254942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.695 [2024-12-06 16:18:52.254947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
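The lines above are nvmfappstart -m 0x3 at work: test/nvmf/common.sh launches the freshly built nvmf_tgt with the shared-memory id (-i 0), full tracepoint mask (-e 0xFFFF) and a two-core mask (-m 0x3), records nvmfpid, and waitforlisten blocks until the RPC socket answers. A minimal sketch of that launch-and-wait step, assuming this run's workspace layout; the real waitforlisten in test/common/autotest_common.sh also installs traps and probes the socket over RPC rather than just testing for the file:

    # Sketch only: simplified nvmfappstart/waitforlisten, not the suite verbatim.
    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
    "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1            # target died during startup
        [[ -S /var/tmp/spdk.sock ]] && break    # RPC socket is up, target is ready
        sleep 0.1
    done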
00:05:57.695 [2024-12-06 16:18:52.255991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.695 [2024-12-06 16:18:52.255993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.695 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.695 [2024-12-06 16:18:52.406534] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1447940/0x144be30) succeed. 00:05:57.695 [2024-12-06 16:18:52.414318] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1448e90/0x148d4d0) succeed. 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.955 [2024-12-06 16:18:52.496432] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.955 NULL1 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.955 Delay0 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3630668 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:57.955 16:18:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.955 [2024-12-06 16:18:52.602872] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
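For anyone replaying this setup by hand: rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock, so the sequence just traced is equivalent to the commands below (arguments copied verbatim from the trace; the bdev_delay latencies are in microseconds, so one full second on every read and write, which is what guarantees I/O is still in flight when the subsystem gets deleted):

    # Hand replay of the traced setup; assumes the nvmf_tgt started above is running.
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf then drives the listener for 5 s of 70/30 randrw at queue depth 128 with 512-byte I/O (-t 5 -q 128 -w randrw -M 70 -o 512), exactly as shown at the end of the trace above.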
00:05:59.943 16:18:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.943 16:18:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.943 16:18:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.915 NVMe io qpair process completion error 00:06:00.915 NVMe io qpair process completion error 00:06:01.172 NVMe io qpair process completion error 00:06:01.172 NVMe io qpair process completion error 00:06:01.172 NVMe io qpair process completion error 00:06:01.172 NVMe io qpair process completion error 00:06:01.172 16:18:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.172 16:18:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:01.172 16:18:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3630668 00:06:01.172 16:18:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:01.739 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:01.739 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3630668 00:06:01.739 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Read completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 starting I/O 
failed: -6 00:06:01.998 Write completed with error (sct=0, sc=8) 00:06:01.998 [several hundred further "Read/Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6" submission failures, elided; the tail of the flood and spdk_nvme_perf's final summary continue below]
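The flood of failed completions is the expected outcome here, not a malfunction: sct=0 selects the NVMe generic command status table, and sc=8 in that table is, per the NVMe base specification, "Command Aborted due to SQ Deletion", which is exactly what in-flight perf I/O should see once nvmf_delete_subsystem tears the queues down. A throwaway decoder for the pairs seen in this log (values from the spec's status table; not part of the SPDK test scripts):

    # Decode the (sct=0, sc=N) pairs in the log; entries per the NVMe base spec,
    # Generic Command Status (SCT 0). Sketch only, not part of the test suite.
    decode_generic_sc() {
        case "$1" in
            0) echo "Successful Completion" ;;
            4) echo "Data Transfer Error" ;;
            7) echo "Command Abort Requested" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "other: see the NVMe base spec status tables" ;;
        esac
    }
    decode_generic_sc 8    # -> Command Aborted due to SQ Deletion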
00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Write completed with error (sct=0, sc=8) 00:06:01.999 Read completed with error (sct=0, sc=8) 00:06:01.999 Initializing NVMe Controllers 00:06:01.999 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:01.999 Controller IO queue size 128, less than required. 00:06:01.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:01.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:01.999 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:01.999 Initialization complete. Launching workers. 00:06:01.999 ======================================================== 00:06:01.999 Latency(us) 00:06:01.999 Device Information : IOPS MiB/s Average min max 00:06:02.000 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.38 0.04 1595239.81 1000531.71 2980798.50 00:06:02.000 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.38 0.04 1596670.89 1000959.68 2982421.89 00:06:02.000 ======================================================== 00:06:02.000 Total : 160.77 0.08 1595955.35 1000531.71 2982421.89 00:06:02.000 00:06:02.000 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:02.000 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3630668 00:06:02.000 16:18:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:02.000 [2024-12-06 16:18:56.689639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:06:02.000 [2024-12-06 16:18:56.689680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
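Between the delete at 16:18:54 and perf's "errors occurred" exit below sits the polling loop traced at delete_subsystem.sh@34-@38: the test gives spdk_nvme_perf roughly 15 seconds (30 polls of 0.5 s) to notice the dead subsystem and exit on its own. The loop shape below is reconstructed from the traced line numbers, so treat it as an approximation rather than the script verbatim:

    # Approximate reconstruction of delete_subsystem.sh@34-@38 from the trace.
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do     # $perf_pid: the spdk_nvme_perf pid
        (( delay++ > 30 )) && exit 1               # perf never exited: test failure
        sleep 0.5
    done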
00:06:02.000 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:02.566 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3630668 00:06:02.567 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3630668) - No such process 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3630668 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3630668 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3630668 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 [2024-12-06 16:18:57.206490] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3631473 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.567 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:02.825 [2024-12-06 16:18:57.296660] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:03.084 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.084 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:03.084 16:18:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.654 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.654 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:03.654 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.219 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.219 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:04.219 16:18:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.784 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:04.784 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:04.784 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.042 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.042 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:05.042 16:18:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.608 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.608 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:05.608 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.174 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.174 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:06.174 16:19:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.742 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.742 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:06.742 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.311 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.311 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:07.311 16:19:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.571 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.571 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:07.571 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.140 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.141 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:08.141 16:19:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.710 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.710 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:08.710 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.278 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.278 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:09.278 16:19:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.847 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.847 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473 00:06:09.847 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.847 Initializing NVMe Controllers 00:06:09.847 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.847 Controller IO queue size 128, less than required. 00:06:09.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:09.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:09.847 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:09.847 Initialization complete. Launching workers.
00:06:09.847 ========================================================
00:06:09.847 Latency(us)
00:06:09.847 Device Information : IOPS MiB/s Average min max
00:06:09.847 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001214.77 1000048.14 1003965.49
00:06:09.847 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002350.61 1000100.11 1006152.63
00:06:09.847 ========================================================
00:06:09.847 Total : 256.00 0.12 1001782.69 1000048.14 1006152.63
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3631473
00:06:10.107 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3631473) - No such process
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3631473
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:10.107 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:06:10.107 rmmod nvme_rdma
00:06:10.107 rmmod nvme_fabrics
00:06:10.366 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3630640 ']'
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3630640
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3630640 ']'
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3630640
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3630640
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3630640'
00:06:10.367 killing process with pid 3630640
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3630640
00:06:10.367 16:19:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3630640
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:10.626
00:06:10.626 real 0m18.759s
00:06:10.626 user 0m48.432s
00:06:10.626 sys 0m5.209s
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:10.626 ************************************
00:06:10.626 END TEST nvmf_delete_subsystem
00:06:10.626 ************************************
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:10.626 ************************************
00:06:10.626 START TEST nvmf_host_management
00:06:10.626 ************************************
00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:06:10.626 * Looking for test storage...
00:06:10.626 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.626 --rc genhtml_branch_coverage=1 00:06:10.626 --rc genhtml_function_coverage=1 00:06:10.626 --rc genhtml_legend=1 00:06:10.626 --rc geninfo_all_blocks=1 00:06:10.626 --rc geninfo_unexecuted_blocks=1 00:06:10.626 00:06:10.626 ' 00:06:10.626 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.626 --rc genhtml_branch_coverage=1 00:06:10.627 --rc genhtml_function_coverage=1 00:06:10.627 --rc genhtml_legend=1 00:06:10.627 --rc geninfo_all_blocks=1 00:06:10.627 --rc geninfo_unexecuted_blocks=1 00:06:10.627 00:06:10.627 ' 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.627 --rc genhtml_branch_coverage=1 00:06:10.627 --rc genhtml_function_coverage=1 00:06:10.627 --rc genhtml_legend=1 00:06:10.627 --rc geninfo_all_blocks=1 00:06:10.627 --rc geninfo_unexecuted_blocks=1 00:06:10.627 00:06:10.627 ' 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.627 --rc genhtml_branch_coverage=1 00:06:10.627 --rc genhtml_function_coverage=1 00:06:10.627 --rc genhtml_legend=1 00:06:10.627 --rc geninfo_all_blocks=1 00:06:10.627 --rc geninfo_unexecuted_blocks=1 00:06:10.627 00:06:10.627 ' 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.627 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.886 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.886 16:19:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:16.163 16:19:10 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:16.163 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:16.163 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:16.163 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:16.164 Found net devices under 0000:18:00.0: mlx_0_0 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:06:16.164 Found net devices under 0000:18:00.1: mlx_0_1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:16.164 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:16.164 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:16.164 altname enp24s0f0np0 00:06:16.164 altname ens785f0np0 00:06:16.164 inet 192.168.100.8/24 scope global mlx_0_0 00:06:16.164 valid_lft forever preferred_lft forever 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:16.164 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:16.164 link/ether 50:6b:4b:b4:ac:7b brd 
ff:ff:ff:ff:ff:ff 00:06:16.164 altname enp24s0f1np1 00:06:16.164 altname ens785f1np1 00:06:16.164 inet 192.168.100.9/24 scope global mlx_0_1 00:06:16.164 valid_lft forever preferred_lft forever 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:16.164 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:16.165 16:19:10 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:16.165 192.168.100.9' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:16.165 192.168.100.9' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:16.165 192.168.100.9' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3636253 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3636253 00:06:16.165 
16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3636253 ']' 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:16.165 [2024-12-06 16:19:10.576791] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:06:16.165 [2024-12-06 16:19:10.576835] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.165 [2024-12-06 16:19:10.634544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.165 [2024-12-06 16:19:10.674232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.165 [2024-12-06 16:19:10.674270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.165 [2024-12-06 16:19:10.674276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.165 [2024-12-06 16:19:10.674282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.165 [2024-12-06 16:19:10.674288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:16.165 [2024-12-06 16:19:10.675527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.165 [2024-12-06 16:19:10.675609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.165 [2024-12-06 16:19:10.675715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.165 [2024-12-06 16:19:10.675716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.165 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.165 [2024-12-06 16:19:10.831275] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16f33c0/0x16f78b0) succeed. 00:06:16.165 [2024-12-06 16:19:10.840148] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16f4a50/0x1738f50) succeed. 
00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.424 16:19:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 Malloc0 00:06:16.424 [2024-12-06 16:19:11.015913] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3636357 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3636357 /var/tmp/bdevperf.sock 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3636357 ']' 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:16.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:16.424 { 00:06:16.424 "params": { 00:06:16.424 "name": "Nvme$subsystem", 00:06:16.424 "trtype": "$TEST_TRANSPORT", 00:06:16.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:16.424 "adrfam": "ipv4", 00:06:16.424 "trsvcid": "$NVMF_PORT", 00:06:16.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:16.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:16.424 "hdgst": ${hdgst:-false}, 00:06:16.424 "ddgst": ${ddgst:-false} 00:06:16.424 }, 00:06:16.424 "method": "bdev_nvme_attach_controller" 00:06:16.424 } 00:06:16.424 EOF 00:06:16.424 )") 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:16.424 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:16.424 "params": { 00:06:16.424 "name": "Nvme0", 00:06:16.424 "trtype": "rdma", 00:06:16.424 "traddr": "192.168.100.8", 00:06:16.424 "adrfam": "ipv4", 00:06:16.424 "trsvcid": "4420", 00:06:16.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:16.424 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:16.424 "hdgst": false, 00:06:16.424 "ddgst": false 00:06:16.424 }, 00:06:16.424 "method": "bdev_nvme_attach_controller" 00:06:16.424 }' 00:06:16.424 [2024-12-06 16:19:11.106496] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:06:16.424 [2024-12-06 16:19:11.106538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636357 ] 00:06:16.682 [2024-12-06 16:19:11.164201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.682 [2024-12-06 16:19:11.202205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.682 Running I/O for 10 seconds... 
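The JSON printed by gen_nvmf_target_json is handed to bdevperf on fd 63 and attaches a single controller, Nvme0, to the listener created above; the remaining flags define the workload. For reference:

  # bdevperf flags from the traced invocation
  #   -r /var/tmp/bdevperf.sock   RPC socket the harness talks to
  #   -q 64                       queue depth (outstanding I/Os)
  #   -o 65536                    I/O size in bytes (64 KiB)
  #   -w verify                   write, read back, and compare
  #   -t 10                       run time in seconds

With core mask 0x1 in the EAL parameters, the whole run is pinned to a single reactor, which the "Reactor started on core 0" notice just above confirms.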
00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=170 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 170 -ge 100 ']' 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
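waitforio (script lines 45-64, traced above) allows up to ten iostat samples for the bdev to complete 100 reads before the fault is injected; the first sample already shows read_io_count=170, so the loop exits at once and the remove_host RPC fires while I/O is still in flight. A minimal sketch of the same polling idea, assuming the socket and bdev names from this run (the pacing between samples is an assumption; the helper's actual delay is not visible in the trace):

  # Poll bdev iostat until at least 100 reads have completed, up to 10 tries
  for i in {1..10}; do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 0.25   # assumed interval for the sketch
  done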
00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.940 16:19:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:17.878 295.00 IOPS, 18.44 MiB/s [2024-12-06T15:19:12.606Z] [2024-12-06 16:19:12.500965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106d1c00 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.500996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c1b80 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106b1b00 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a1a80 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010691a00 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010681980 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010671900 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 
sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010661880 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010651800 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010641780 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010631700 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010621680 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010611600 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010601580 len:0x10000 key:0x182a00 00:06:17.878 [2024-12-06 16:19:12.501191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170cfe80 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170bfe00 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 
[2024-12-06 16:19:12.501228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000170afd80 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001709fd00 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001708fc80 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001707fc00 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001706fb80 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001705fb00 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001704fa80 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001703fa00 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001702f980 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001701f900 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001700f880 len:0x10000 key:0x182000 00:06:17.878 [2024-12-06 16:19:12.501373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.878 [2024-12-06 16:19:12.501385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016eeff80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016edff00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ecfe80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016ebfe00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016eafd80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e9fd00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e8fc80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e7fc00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e6fb80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e5fb00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e4fa80 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016e3fa00 len:0x10000 key:0x182100 00:06:17.879 [2024-12-06 16:19:12.501542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ac4000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009178000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009157000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000097a8000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501609] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009787000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009dd8000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200009db7000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a408000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3e7000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3c6000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3a5000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a384000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a363000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a342000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a321000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a300000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6ff000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6de000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6bd000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a69c000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a67b000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a65a000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000a639000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a618000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.879 [2024-12-06 16:19:12.501886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5f7000 len:0x10000 key:0x182900 00:06:17.879 [2024-12-06 16:19:12.501892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:57b78000 sqhd:7210 p:0 m:0 dnr:0 00:06:17.880 [2024-12-06 16:19:12.504414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:17.880 task offset: 40960 on job bdev=Nvme0n1 fails 00:06:17.880 00:06:17.880 Latency(us) 00:06:17.880 [2024-12-06T15:19:12.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.880 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:17.880 Job: Nvme0n1 ended in about 1.12 seconds with error 00:06:17.880 Verification LBA range: start 0x0 length 0x400 00:06:17.880 Nvme0n1 : 1.12 263.26 16.45 57.11 0.00 198224.31 2208.81 1012846.74 00:06:17.880 [2024-12-06T15:19:12.608Z] =================================================================================================================== 00:06:17.880 [2024-12-06T15:19:12.608Z] Total : 263.26 16.45 57.11 0.00 198224.31 2208.81 1012846.74 00:06:17.880 [2024-12-06 16:19:12.505943] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3636357 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:17.880 { 00:06:17.880 "params": { 00:06:17.880 "name": "Nvme$subsystem", 00:06:17.880 "trtype": "$TEST_TRANSPORT", 00:06:17.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:17.880 "adrfam": "ipv4", 00:06:17.880 "trsvcid": "$NVMF_PORT", 00:06:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:17.880 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:17.880 "hdgst": ${hdgst:-false}, 00:06:17.880 "ddgst": ${ddgst:-false} 00:06:17.880 }, 00:06:17.880 "method": "bdev_nvme_attach_controller" 00:06:17.880 } 00:06:17.880 EOF 00:06:17.880 )") 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:17.880 16:19:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:17.880 "params": { 00:06:17.880 "name": "Nvme0", 00:06:17.880 "trtype": "rdma", 00:06:17.880 "traddr": "192.168.100.8", 00:06:17.880 "adrfam": "ipv4", 00:06:17.880 "trsvcid": "4420", 00:06:17.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:17.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:17.880 "hdgst": false, 00:06:17.880 "ddgst": false 00:06:17.880 }, 00:06:17.880 "method": "bdev_nvme_attach_controller" 00:06:17.880 }' 00:06:17.880 [2024-12-06 16:19:12.554993] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:06:17.880 [2024-12-06 16:19:12.555039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636630 ] 00:06:18.138 [2024-12-06 16:19:12.612180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.138 [2024-12-06 16:19:12.649931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.138 Running I/O for 1 seconds... 00:06:19.514 3304.00 IOPS, 206.50 MiB/s 00:06:19.514 Latency(us) 00:06:19.514 [2024-12-06T15:19:14.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.514 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:19.514 Verification LBA range: start 0x0 length 0x400 00:06:19.514 Nvme0n1 : 1.01 3327.97 208.00 0.00 0.00 18849.74 576.47 30874.74 00:06:19.514 [2024-12-06T15:19:14.242Z] =================================================================================================================== 00:06:19.514 [2024-12-06T15:19:14.242Z] Total : 3327.97 208.00 0.00 0.00 18849.74 576.47 30874.74 00:06:19.514 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3636357 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:19.514 
16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:19.514 rmmod nvme_rdma 00:06:19.514 rmmod nvme_fabrics 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3636253 ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3636253 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3636253 ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3636253 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3636253 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3636253' 00:06:19.514 killing process with pid 3636253 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3636253 00:06:19.514 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3636253 00:06:19.773 [2024-12-06 16:19:14.352539] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:19.773 00:06:19.773 real 0m9.203s 00:06:19.773 user 0m18.770s 00:06:19.773 sys 0m4.732s 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.773 ************************************ 00:06:19.773 END TEST nvmf_host_management 
00:06:19.773 ************************************ 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.773 ************************************ 00:06:19.773 START TEST nvmf_lvol 00:06:19.773 ************************************ 00:06:19.773 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:20.032 * Looking for test storage... 00:06:20.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.032 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.033 --rc genhtml_branch_coverage=1 00:06:20.033 --rc genhtml_function_coverage=1 00:06:20.033 --rc genhtml_legend=1 00:06:20.033 --rc geninfo_all_blocks=1 00:06:20.033 --rc geninfo_unexecuted_blocks=1 00:06:20.033 00:06:20.033 ' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.033 --rc genhtml_branch_coverage=1 00:06:20.033 --rc genhtml_function_coverage=1 00:06:20.033 --rc genhtml_legend=1 00:06:20.033 --rc geninfo_all_blocks=1 00:06:20.033 --rc geninfo_unexecuted_blocks=1 00:06:20.033 00:06:20.033 ' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.033 --rc genhtml_branch_coverage=1 00:06:20.033 --rc genhtml_function_coverage=1 00:06:20.033 --rc genhtml_legend=1 00:06:20.033 --rc geninfo_all_blocks=1 00:06:20.033 --rc geninfo_unexecuted_blocks=1 00:06:20.033 00:06:20.033 ' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:20.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.033 --rc genhtml_branch_coverage=1 00:06:20.033 --rc genhtml_function_coverage=1 00:06:20.033 --rc genhtml_legend=1 00:06:20.033 --rc geninfo_all_blocks=1 00:06:20.033 --rc geninfo_unexecuted_blocks=1 00:06:20.033 00:06:20.033 ' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.033 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.033 16:19:14 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.317 16:19:19 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:25.317 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:25.317 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:25.317 16:19:19 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:25.317 16:19:20 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:25.317 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:25.317 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:25.317 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:25.317 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:25.318 Found net devices under 0000:18:00.0: mlx_0_0 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:25.318 Found net devices under 0000:18:00.1: mlx_0_1 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
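rdma_device_init has just begun loading the kernel IB/RDMA stack (ib_cm and ib_core above; ib_umad, ib_uverbs, iw_cm, rdma_cm and rdma_ucm follow on the next lines). A minimal standalone sketch of that step, with the module list taken from this trace and the loop and error handling added purely for illustration:

  # Load the IB/RDMA kernel modules needed before any NVMe/RDMA traffic can flow.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
  done

Once the modules are in, allocate_nic_ips walks get_rdma_if_list and picks up the 192.168.100.0/24 test addresses on the mlx_0_* interfaces, as the rest of this block shows.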
00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:25.318 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:25.578 
16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:25.578 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:25.578 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:25.578 altname enp24s0f0np0 00:06:25.578 altname ens785f0np0 00:06:25.578 inet 192.168.100.8/24 scope global mlx_0_0 00:06:25.578 valid_lft forever preferred_lft forever 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:25.578 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:25.578 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:06:25.578 altname enp24s0f1np1 00:06:25.578 altname ens785f1np1 00:06:25.578 inet 192.168.100.9/24 scope global mlx_0_1 00:06:25.578 valid_lft forever preferred_lft forever 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:25.578 192.168.100.9' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:25.578 192.168.100.9' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:25.578 192.168.100.9' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:25.578 
16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3640164 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3640164 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3640164 ']' 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.578 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.578 [2024-12-06 16:19:20.255571] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:06:25.579 [2024-12-06 16:19:20.255625] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.837 [2024-12-06 16:19:20.320239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.837 [2024-12-06 16:19:20.361742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.837 [2024-12-06 16:19:20.361773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.837 [2024-12-06 16:19:20.361780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.838 [2024-12-06 16:19:20.361786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.838 [2024-12-06 16:19:20.361791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
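nvmfappstart has launched build/bin/nvmf_tgt with core mask 0x7 and is waiting on /var/tmp/spdk.sock; the three reactors report in on the next lines, and from then on the test drives the whole topology through rpc.py. Condensed, the RPC sequence traced below amounts to the following (the shell variables capturing returned names and UUIDs are illustrative plumbing, not part of the test script):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  base0=$($rpc bdev_malloc_create 64 512)              # Malloc0
  base1=$($rpc bdev_malloc_create 64 512)              # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # returns the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20M lvol on the raid0 store
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

With the listener up, spdk_nvme_perf runs randwrite against the lvol while it is snapshotted (MY_SNAPSHOT), resized to 30M, cloned (MY_CLONE) and inflated, which is what produces the latency table further down.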
00:06:25.838 [2024-12-06 16:19:20.363016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.838 [2024-12-06 16:19:20.363037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.838 [2024-12-06 16:19:20.363042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.838 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:26.096 [2024-12-06 16:19:20.674836] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa2f500/0xa339f0) succeed. 00:06:26.096 [2024-12-06 16:19:20.682866] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa30af0/0xa75090) succeed. 00:06:26.096 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.355 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.355 16:19:20 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.613 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.613 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:26.613 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:26.872 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eb7acda2-23a0-4582-b2e1-f58bae0242bd 00:06:26.872 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eb7acda2-23a0-4582-b2e1-f58bae0242bd lvol 20 00:06:27.132 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=77abc2ff-c2f6-45c9-ba28-6cb03b818dae 00:06:27.132 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.392 16:19:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 77abc2ff-c2f6-45c9-ba28-6cb03b818dae 00:06:27.392 16:19:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:27.652 [2024-12-06 16:19:22.217353] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:27.652 16:19:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:27.911 16:19:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3640715 00:06:27.911 16:19:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:27.911 16:19:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:28.849 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 77abc2ff-c2f6-45c9-ba28-6cb03b818dae MY_SNAPSHOT 00:06:29.108 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=808fe9de-c27f-411c-878e-c7300e50b0c1 00:06:29.108 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 77abc2ff-c2f6-45c9-ba28-6cb03b818dae 30 00:06:29.108 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 808fe9de-c27f-411c-878e-c7300e50b0c1 MY_CLONE 00:06:29.368 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=da5ace93-c67a-421d-b61c-2f6b515e2f11 00:06:29.368 16:19:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate da5ace93-c67a-421d-b61c-2f6b515e2f11 00:06:29.627 16:19:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3640715 00:06:39.603 Initializing NVMe Controllers 00:06:39.603 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.603 Controller IO queue size 128, less than required. 00:06:39.603 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:39.603 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:39.603 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:39.603 Initialization complete. Launching workers. 
00:06:39.603 ======================================================== 00:06:39.603 Latency(us) 00:06:39.603 Device Information : IOPS MiB/s Average min max 00:06:39.603 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16906.00 66.04 7573.30 2056.07 44688.21 00:06:39.603 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16816.10 65.69 7613.41 3761.84 48748.54 00:06:39.603 ======================================================== 00:06:39.603 Total : 33722.10 131.73 7593.30 2056.07 48748.54 00:06:39.603 00:06:39.603 16:19:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.603 16:19:33 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 77abc2ff-c2f6-45c9-ba28-6cb03b818dae 00:06:39.603 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb7acda2-23a0-4582-b2e1-f58bae0242bd 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:39.862 rmmod nvme_rdma 00:06:39.862 rmmod nvme_fabrics 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3640164 ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3640164 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3640164 ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3640164 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3640164 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3640164' 00:06:39.862 killing process with pid 3640164 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3640164 00:06:39.862 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3640164 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:40.122 00:06:40.122 real 0m20.233s 00:06:40.122 user 1m9.106s 00:06:40.122 sys 0m5.150s 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:40.122 ************************************ 00:06:40.122 END TEST nvmf_lvol 00:06:40.122 ************************************ 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.122 ************************************ 00:06:40.122 START TEST nvmf_lvs_grow 00:06:40.122 ************************************ 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:06:40.122 * Looking for test storage... 
00:06:40.122 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.122 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.386 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.387 --rc genhtml_branch_coverage=1 00:06:40.387 --rc genhtml_function_coverage=1 00:06:40.387 --rc genhtml_legend=1 00:06:40.387 --rc geninfo_all_blocks=1 00:06:40.387 --rc geninfo_unexecuted_blocks=1 00:06:40.387 00:06:40.387 ' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.387 --rc genhtml_branch_coverage=1 00:06:40.387 --rc genhtml_function_coverage=1 00:06:40.387 --rc genhtml_legend=1 00:06:40.387 --rc geninfo_all_blocks=1 00:06:40.387 --rc geninfo_unexecuted_blocks=1 00:06:40.387 00:06:40.387 ' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.387 --rc genhtml_branch_coverage=1 00:06:40.387 --rc genhtml_function_coverage=1 00:06:40.387 --rc genhtml_legend=1 00:06:40.387 --rc geninfo_all_blocks=1 00:06:40.387 --rc geninfo_unexecuted_blocks=1 00:06:40.387 00:06:40.387 ' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.387 --rc genhtml_branch_coverage=1 00:06:40.387 --rc genhtml_function_coverage=1 00:06:40.387 --rc genhtml_legend=1 00:06:40.387 --rc geninfo_all_blocks=1 00:06:40.387 --rc geninfo_unexecuted_blocks=1 00:06:40.387 00:06:40.387 ' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
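The lt 1.15 2 check above is cmp_versions splitting each version string on '.' and comparing the fields numerically from the left: 1 < 2 decides it at the first field, so the branch- and function-coverage flags get exported in LCOV_OPTS for this pre-2.x lcov. The same element-wise comparison, reduced to a self-contained helper (version_lt is an illustrative name; the real cmp_versions also sanitizes non-numeric fields through its decimal function):

  version_lt() {                       # true if $1 is strictly older than $2
      local IFS=.
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                         # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"

The trace then moves on to sourcing test/nvmf/common.sh for the lvs_grow run; the uname -s just above feeds the FreeBSD check that opens the next block.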
00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.387 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.387 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.388 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.388 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.388 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.388 16:19:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.957 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.958 16:19:40 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:46.958 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:46.958 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:46.958 16:19:40 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:46.958 Found net devices under 0000:18:00.0: mlx_0_0 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:46.958 Found net devices under 0000:18:00.1: mlx_0_1 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.958 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:46.959 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:46.959 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:46.959 altname enp24s0f0np0 00:06:46.959 altname ens785f0np0 00:06:46.959 inet 192.168.100.8/24 scope global mlx_0_0 00:06:46.959 valid_lft forever preferred_lft forever 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:46.959 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:46.959 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:06:46.959 altname enp24s0f1np1 00:06:46.959 altname ens785f1np1 00:06:46.959 inet 192.168.100.9/24 scope global mlx_0_1 00:06:46.959 valid_lft forever preferred_lft forever 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.959 16:19:40 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:46.959 192.168.100.9' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:46.959 192.168.100.9' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:46.959 192.168.100.9' 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:06:46.959 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3646190 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3646190 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3646190 ']' 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 [2024-12-06 16:19:40.730411] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:06:46.960 [2024-12-06 16:19:40.730458] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.960 [2024-12-06 16:19:40.792495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.960 [2024-12-06 16:19:40.832144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.960 [2024-12-06 16:19:40.832181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.960 [2024-12-06 16:19:40.832187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.960 [2024-12-06 16:19:40.832192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.960 [2024-12-06 16:19:40.832197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
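Annotation: the address derivation traced above reduces to a short pipeline — take the first IPv4 address of each RDMA interface, strip the prefix length, and split the resulting list into first/second target IPs. A minimal sketch of that logic, condensed from the nvmf/common.sh trace in this run (interface names and addresses are the ones logged here, not fixed values):
  # get_ip_address mirrors nvmf/common.sh@116-117 as traced above.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is e.g. "192.168.100.8/24".
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"                                             # two lines in this run: 192.168.100.8, 192.168.100.9
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9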
00:06:46.960 [2024-12-06 16:19:40.832689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.960 16:19:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:46.960 [2024-12-06 16:19:41.139649] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10b3dc0/0x10b82b0) succeed. 00:06:46.960 [2024-12-06 16:19:41.147303] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10b5270/0x10f9950) succeed. 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.960 ************************************ 00:06:46.960 START TEST lvs_grow_clean 00:06:46.960 ************************************ 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a2268991-b69e-4e03-bc29-81e3f00808da 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:06:46.960 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a2268991-b69e-4e03-bc29-81e3f00808da lvol 150 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b1e8df4-0508-4887-b480-aad46a46590f 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:47.220 16:19:41 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:47.479 [2024-12-06 16:19:42.088037] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:47.479 [2024-12-06 16:19:42.088089] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:47.479 true 00:06:47.479 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:06:47.479 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:47.738 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:47.738 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.738 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b1e8df4-0508-4887-b480-aad46a46590f 00:06:47.997 16:19:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:48.255 [2024-12-06 16:19:42.786263] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:48.255 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3646658 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3646658 /var/tmp/bdevperf.sock 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3646658 ']' 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.513 16:19:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:48.513 [2024-12-06 16:19:43.024453] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
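Annotation: before the bdevperf instance starting here, the clean-path setup above assembled the whole stack over JSON-RPC. A condensed sketch of those calls (rpc.py abbreviates the full scripts/rpc.py path logged above; $lvs and $lvol stand in for the a2268991-... and 1b1e8df4-... UUIDs this particular run got):
  aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio_file"
  rpc.py bdev_aio_create "$aio_file" aio_bdev 4096                     # 4 KiB blocks -> 51200 blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)                 # store reports 49 data clusters
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)                   # 150 MiB volume on the store
  truncate -s 400M "$aio_file"                                         # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev                                      # ...and rescan: 51200 -> 102400 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
The 49 reported here, and the 99 checked after bdev_lvol_grow_lvstore below, are consistent with 200 MiB and 400 MiB at 4 MiB per cluster (50 and 100 raw clusters) minus one cluster's worth of lvstore metadata; that overhead is an inference from the counts in this log, not something the trace states directly.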
00:06:48.514 [2024-12-06 16:19:43.024499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646658 ] 00:06:48.514 [2024-12-06 16:19:43.080196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.514 [2024-12-06 16:19:43.119080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.514 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.514 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:48.514 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:48.772 Nvme0n1 00:06:48.772 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:49.030 [ 00:06:49.030 { 00:06:49.030 "name": "Nvme0n1", 00:06:49.030 "aliases": [ 00:06:49.030 "1b1e8df4-0508-4887-b480-aad46a46590f" 00:06:49.030 ], 00:06:49.031 "product_name": "NVMe disk", 00:06:49.031 "block_size": 4096, 00:06:49.031 "num_blocks": 38912, 00:06:49.031 "uuid": "1b1e8df4-0508-4887-b480-aad46a46590f", 00:06:49.031 "numa_id": 0, 00:06:49.031 "assigned_rate_limits": { 00:06:49.031 "rw_ios_per_sec": 0, 00:06:49.031 "rw_mbytes_per_sec": 0, 00:06:49.031 "r_mbytes_per_sec": 0, 00:06:49.031 "w_mbytes_per_sec": 0 00:06:49.031 }, 00:06:49.031 "claimed": false, 00:06:49.031 "zoned": false, 00:06:49.031 "supported_io_types": { 00:06:49.031 "read": true, 00:06:49.031 "write": true, 00:06:49.031 "unmap": true, 00:06:49.031 "flush": true, 00:06:49.031 "reset": true, 00:06:49.031 "nvme_admin": true, 00:06:49.031 "nvme_io": true, 00:06:49.031 "nvme_io_md": false, 00:06:49.031 "write_zeroes": true, 00:06:49.031 "zcopy": false, 00:06:49.031 "get_zone_info": false, 00:06:49.031 "zone_management": false, 00:06:49.031 "zone_append": false, 00:06:49.031 "compare": true, 00:06:49.031 "compare_and_write": true, 00:06:49.031 "abort": true, 00:06:49.031 "seek_hole": false, 00:06:49.031 "seek_data": false, 00:06:49.031 "copy": true, 00:06:49.031 "nvme_iov_md": false 00:06:49.031 }, 00:06:49.031 "memory_domains": [ 00:06:49.031 { 00:06:49.031 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:06:49.031 "dma_device_type": 0 00:06:49.031 } 00:06:49.031 ], 00:06:49.031 "driver_specific": { 00:06:49.031 "nvme": [ 00:06:49.031 { 00:06:49.031 "trid": { 00:06:49.031 "trtype": "RDMA", 00:06:49.031 "adrfam": "IPv4", 00:06:49.031 "traddr": "192.168.100.8", 00:06:49.031 "trsvcid": "4420", 00:06:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:49.031 }, 00:06:49.031 "ctrlr_data": { 00:06:49.031 "cntlid": 1, 00:06:49.031 "vendor_id": "0x8086", 00:06:49.031 "model_number": "SPDK bdev Controller", 00:06:49.031 "serial_number": "SPDK0", 00:06:49.031 "firmware_revision": "25.01", 00:06:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.031 "oacs": { 00:06:49.031 "security": 0, 00:06:49.031 "format": 0, 00:06:49.031 "firmware": 0, 00:06:49.031 "ns_manage": 0 00:06:49.031 }, 00:06:49.031 "multi_ctrlr": true, 
00:06:49.031 "ana_reporting": false 00:06:49.031 }, 00:06:49.031 "vs": { 00:06:49.031 "nvme_version": "1.3" 00:06:49.031 }, 00:06:49.031 "ns_data": { 00:06:49.031 "id": 1, 00:06:49.031 "can_share": true 00:06:49.031 } 00:06:49.031 } 00:06:49.031 ], 00:06:49.031 "mp_policy": "active_passive" 00:06:49.031 } 00:06:49.031 } 00:06:49.031 ] 00:06:49.031 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:49.031 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3646825 00:06:49.031 16:19:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:49.031 Running I/O for 10 seconds... 00:06:50.406 Latency(us) 00:06:50.406 [2024-12-06T15:19:45.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.406 Nvme0n1 : 1.00 36898.00 144.13 0.00 0.00 0.00 0.00 0.00 00:06:50.406 [2024-12-06T15:19:45.134Z] =================================================================================================================== 00:06:50.406 [2024-12-06T15:19:45.134Z] Total : 36898.00 144.13 0.00 0.00 0.00 0.00 0.00 00:06:50.406 00:06:50.972 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a2268991-b69e-4e03-bc29-81e3f00808da 00:06:51.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.229 Nvme0n1 : 2.00 37249.00 145.50 0.00 0.00 0.00 0.00 0.00 00:06:51.229 [2024-12-06T15:19:45.957Z] =================================================================================================================== 00:06:51.229 [2024-12-06T15:19:45.957Z] Total : 37249.00 145.50 0.00 0.00 0.00 0.00 0.00 00:06:51.229 00:06:51.229 true 00:06:51.229 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:06:51.229 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:51.502 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:51.502 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:51.502 16:19:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3646825 00:06:52.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.070 Nvme0n1 : 3.00 37396.67 146.08 0.00 0.00 0.00 0.00 0.00 00:06:52.070 [2024-12-06T15:19:46.798Z] =================================================================================================================== 00:06:52.070 [2024-12-06T15:19:46.798Z] Total : 37396.67 146.08 0.00 0.00 0.00 0.00 0.00 00:06:52.070 00:06:53.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.006 Nvme0n1 : 4.00 37543.75 146.66 0.00 0.00 0.00 0.00 0.00 00:06:53.006 [2024-12-06T15:19:47.734Z] 
=================================================================================================================== 00:06:53.006 [2024-12-06T15:19:47.734Z] Total : 37543.75 146.66 0.00 0.00 0.00 0.00 0.00 00:06:53.006 00:06:54.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.383 Nvme0n1 : 5.00 37618.20 146.95 0.00 0.00 0.00 0.00 0.00 00:06:54.383 [2024-12-06T15:19:49.111Z] =================================================================================================================== 00:06:54.383 [2024-12-06T15:19:49.111Z] Total : 37618.20 146.95 0.00 0.00 0.00 0.00 0.00 00:06:54.383 00:06:55.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.032 Nvme0n1 : 6.00 37669.67 147.15 0.00 0.00 0.00 0.00 0.00 00:06:55.032 [2024-12-06T15:19:49.760Z] =================================================================================================================== 00:06:55.032 [2024-12-06T15:19:49.760Z] Total : 37669.67 147.15 0.00 0.00 0.00 0.00 0.00 00:06:55.032 00:06:56.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.017 Nvme0n1 : 7.00 37622.43 146.96 0.00 0.00 0.00 0.00 0.00 00:06:56.017 [2024-12-06T15:19:50.745Z] =================================================================================================================== 00:06:56.017 [2024-12-06T15:19:50.745Z] Total : 37622.43 146.96 0.00 0.00 0.00 0.00 0.00 00:06:56.017 00:06:57.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.395 Nvme0n1 : 8.00 37663.75 147.12 0.00 0.00 0.00 0.00 0.00 00:06:57.395 [2024-12-06T15:19:52.123Z] =================================================================================================================== 00:06:57.395 [2024-12-06T15:19:52.123Z] Total : 37663.75 147.12 0.00 0.00 0.00 0.00 0.00 00:06:57.395 00:06:58.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.332 Nvme0n1 : 9.00 37699.89 147.27 0.00 0.00 0.00 0.00 0.00 00:06:58.332 [2024-12-06T15:19:53.060Z] =================================================================================================================== 00:06:58.332 [2024-12-06T15:19:53.060Z] Total : 37699.89 147.27 0.00 0.00 0.00 0.00 0.00 00:06:58.332 00:06:59.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.270 Nvme0n1 : 10.00 37730.80 147.39 0.00 0.00 0.00 0.00 0.00 00:06:59.270 [2024-12-06T15:19:53.998Z] =================================================================================================================== 00:06:59.270 [2024-12-06T15:19:53.998Z] Total : 37730.80 147.39 0.00 0.00 0.00 0.00 0.00 00:06:59.270 00:06:59.270 00:06:59.270 Latency(us) 00:06:59.270 [2024-12-06T15:19:53.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.270 Nvme0n1 : 10.00 37730.59 147.39 0.00 0.00 3389.62 2318.03 16019.91 00:06:59.270 [2024-12-06T15:19:53.998Z] =================================================================================================================== 00:06:59.270 [2024-12-06T15:19:53.998Z] Total : 37730.59 147.39 0.00 0.00 3389.62 2318.03 16019.91 00:06:59.270 { 00:06:59.270 "results": [ 00:06:59.270 { 00:06:59.270 "job": "Nvme0n1", 00:06:59.270 "core_mask": "0x2", 00:06:59.270 "workload": "randwrite", 00:06:59.270 "status": "finished", 00:06:59.270 "queue_depth": 128, 00:06:59.270 "io_size": 4096, 
00:06:59.270 "runtime": 10.002759, 00:06:59.270 "iops": 37730.59013018308, 00:06:59.270 "mibps": 147.38511769602766, 00:06:59.270 "io_failed": 0, 00:06:59.270 "io_timeout": 0, 00:06:59.270 "avg_latency_us": 3389.6192078130966, 00:06:59.270 "min_latency_us": 2318.0325925925927, 00:06:59.271 "max_latency_us": 16019.91111111111 00:06:59.271 } 00:06:59.271 ], 00:06:59.271 "core_count": 1 00:06:59.271 } 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3646658 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3646658 ']' 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3646658 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3646658 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3646658' 00:06:59.271 killing process with pid 3646658 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3646658 00:06:59.271 Received shutdown signal, test time was about 10.000000 seconds 00:06:59.271 00:06:59.271 Latency(us) 00:06:59.271 [2024-12-06T15:19:53.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.271 [2024-12-06T15:19:53.999Z] =================================================================================================================== 00:06:59.271 [2024-12-06T15:19:53.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3646658 00:06:59.271 16:19:53 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:59.530 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:59.789 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:06:59.789 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:59.789 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:59.789 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:59.789 16:19:54 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.048 [2024-12-06 16:19:54.664208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:00.048 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:00.307 request: 00:07:00.307 { 00:07:00.307 "uuid": "a2268991-b69e-4e03-bc29-81e3f00808da", 00:07:00.307 "method": "bdev_lvol_get_lvstores", 00:07:00.307 "req_id": 1 00:07:00.307 } 00:07:00.307 Got JSON-RPC error response 00:07:00.307 response: 00:07:00.307 { 00:07:00.307 "code": -19, 00:07:00.307 "message": "No such device" 00:07:00.307 } 00:07:00.307 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:00.307 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.307 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.307 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.307 16:19:54 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:00.566 aio_bdev 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b1e8df4-0508-4887-b480-aad46a46590f 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1b1e8df4-0508-4887-b480-aad46a46590f 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:00.566 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b1e8df4-0508-4887-b480-aad46a46590f -t 2000 00:07:00.824 [ 00:07:00.824 { 00:07:00.824 "name": "1b1e8df4-0508-4887-b480-aad46a46590f", 00:07:00.824 "aliases": [ 00:07:00.824 "lvs/lvol" 00:07:00.824 ], 00:07:00.824 "product_name": "Logical Volume", 00:07:00.824 "block_size": 4096, 00:07:00.824 "num_blocks": 38912, 00:07:00.824 "uuid": "1b1e8df4-0508-4887-b480-aad46a46590f", 00:07:00.824 "assigned_rate_limits": { 00:07:00.824 "rw_ios_per_sec": 0, 00:07:00.824 "rw_mbytes_per_sec": 0, 00:07:00.824 "r_mbytes_per_sec": 0, 00:07:00.824 "w_mbytes_per_sec": 0 00:07:00.824 }, 00:07:00.824 "claimed": false, 00:07:00.824 "zoned": false, 00:07:00.824 "supported_io_types": { 00:07:00.824 "read": true, 00:07:00.824 "write": true, 00:07:00.824 "unmap": true, 00:07:00.824 "flush": false, 00:07:00.824 "reset": true, 00:07:00.824 "nvme_admin": false, 00:07:00.824 "nvme_io": false, 00:07:00.824 "nvme_io_md": false, 00:07:00.824 "write_zeroes": true, 00:07:00.824 "zcopy": false, 00:07:00.824 "get_zone_info": false, 00:07:00.824 "zone_management": false, 00:07:00.824 "zone_append": false, 00:07:00.824 "compare": false, 00:07:00.824 "compare_and_write": false, 00:07:00.824 "abort": false, 00:07:00.824 "seek_hole": true, 00:07:00.824 "seek_data": true, 00:07:00.824 "copy": false, 00:07:00.824 "nvme_iov_md": false 00:07:00.824 }, 00:07:00.824 "driver_specific": { 00:07:00.824 "lvol": { 00:07:00.824 "lvol_store_uuid": "a2268991-b69e-4e03-bc29-81e3f00808da", 00:07:00.824 "base_bdev": "aio_bdev", 00:07:00.824 "thin_provision": false, 00:07:00.824 "num_allocated_clusters": 38, 00:07:00.824 "snapshot": false, 00:07:00.824 "clone": false, 00:07:00.824 "esnap_clone": false 00:07:00.824 } 00:07:00.824 } 00:07:00.824 } 00:07:00.824 ] 00:07:00.824 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:00.824 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:00.824 16:19:55 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:01.083 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:01.083 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:01.083 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:01.083 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:01.083 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b1e8df4-0508-4887-b480-aad46a46590f 00:07:01.342 16:19:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2268991-b69e-4e03-bc29-81e3f00808da 00:07:01.620 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:01.620 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.620 00:07:01.620 real 0m15.098s 00:07:01.620 user 0m15.052s 00:07:01.620 sys 0m0.954s 00:07:01.620 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.620 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:01.620 ************************************ 00:07:01.620 END TEST lvs_grow_clean 00:07:01.620 ************************************ 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:01.879 ************************************ 00:07:01.879 START TEST lvs_grow_dirty 00:07:01.879 ************************************ 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:07:01.879 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:01.880 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.880 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:01.880 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.138 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:02.138 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:02.138 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:02.138 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:02.138 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:02.396 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:02.396 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:02.396 16:19:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 406b9005-67d7-461e-b184-9bcf2ec2553a lvol 150 00:07:02.655 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=603b935e-8997-42cf-8362-090314f548f2 00:07:02.655 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.655 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:02.655 [2024-12-06 16:19:57.294653] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:02.655 [2024-12-06 16:19:57.294707] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:02.655 true 00:07:02.655 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:02.655 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:02.913 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:02.913 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.172 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 603b935e-8997-42cf-8362-090314f548f2 00:07:03.172 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:03.430 [2024-12-06 16:19:57.972806] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:03.430 16:19:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3649614 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3649614 /var/tmp/bdevperf.sock 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3649614 ']' 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.430 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:03.699 [2024-12-06 16:19:58.190568] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
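Annotation: the dirty run reuses the same bdevperf harness the clean run used, being launched again here: a standalone bdevperf app on its own RPC socket, an NVMe-oF controller attached to it over RDMA, and an RPC-triggered workload. Condensed from the flags and calls traced in this log (binary and helper paths shortened; they live under build/examples and examples/bdev/bdevperf in this workspace):
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # -z: wait for an RPC-driven start; -S 1: print the one-second interim tables seen below.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests       # returns once the 10 s randwrite run finishes
Because the test issues bdev_lvol_grow_lvstore while perform_tests is still running, the store grows under live 4 KiB random-write load, which is the point of the test.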
00:07:03.699 [2024-12-06 16:19:58.190614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649614 ] 00:07:03.699 [2024-12-06 16:19:58.248142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.699 [2024-12-06 16:19:58.285112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.699 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.699 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:03.699 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:03.957 Nvme0n1 00:07:03.957 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:04.216 [ 00:07:04.216 { 00:07:04.216 "name": "Nvme0n1", 00:07:04.216 "aliases": [ 00:07:04.216 "603b935e-8997-42cf-8362-090314f548f2" 00:07:04.216 ], 00:07:04.216 "product_name": "NVMe disk", 00:07:04.216 "block_size": 4096, 00:07:04.216 "num_blocks": 38912, 00:07:04.216 "uuid": "603b935e-8997-42cf-8362-090314f548f2", 00:07:04.216 "numa_id": 0, 00:07:04.216 "assigned_rate_limits": { 00:07:04.216 "rw_ios_per_sec": 0, 00:07:04.216 "rw_mbytes_per_sec": 0, 00:07:04.216 "r_mbytes_per_sec": 0, 00:07:04.216 "w_mbytes_per_sec": 0 00:07:04.216 }, 00:07:04.216 "claimed": false, 00:07:04.216 "zoned": false, 00:07:04.216 "supported_io_types": { 00:07:04.216 "read": true, 00:07:04.216 "write": true, 00:07:04.216 "unmap": true, 00:07:04.216 "flush": true, 00:07:04.216 "reset": true, 00:07:04.216 "nvme_admin": true, 00:07:04.216 "nvme_io": true, 00:07:04.216 "nvme_io_md": false, 00:07:04.216 "write_zeroes": true, 00:07:04.216 "zcopy": false, 00:07:04.216 "get_zone_info": false, 00:07:04.216 "zone_management": false, 00:07:04.216 "zone_append": false, 00:07:04.216 "compare": true, 00:07:04.216 "compare_and_write": true, 00:07:04.216 "abort": true, 00:07:04.216 "seek_hole": false, 00:07:04.216 "seek_data": false, 00:07:04.216 "copy": true, 00:07:04.216 "nvme_iov_md": false 00:07:04.216 }, 00:07:04.216 "memory_domains": [ 00:07:04.216 { 00:07:04.216 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:04.216 "dma_device_type": 0 00:07:04.216 } 00:07:04.216 ], 00:07:04.216 "driver_specific": { 00:07:04.216 "nvme": [ 00:07:04.216 { 00:07:04.216 "trid": { 00:07:04.216 "trtype": "RDMA", 00:07:04.216 "adrfam": "IPv4", 00:07:04.216 "traddr": "192.168.100.8", 00:07:04.216 "trsvcid": "4420", 00:07:04.216 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:04.216 }, 00:07:04.216 "ctrlr_data": { 00:07:04.216 "cntlid": 1, 00:07:04.216 "vendor_id": "0x8086", 00:07:04.216 "model_number": "SPDK bdev Controller", 00:07:04.216 "serial_number": "SPDK0", 00:07:04.217 "firmware_revision": "25.01", 00:07:04.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.217 "oacs": { 00:07:04.217 "security": 0, 00:07:04.217 "format": 0, 00:07:04.217 "firmware": 0, 00:07:04.217 "ns_manage": 0 00:07:04.217 }, 00:07:04.217 "multi_ctrlr": true, 
00:07:04.217 "ana_reporting": false 00:07:04.217 }, 00:07:04.217 "vs": { 00:07:04.217 "nvme_version": "1.3" 00:07:04.217 }, 00:07:04.217 "ns_data": { 00:07:04.217 "id": 1, 00:07:04.217 "can_share": true 00:07:04.217 } 00:07:04.217 } 00:07:04.217 ], 00:07:04.217 "mp_policy": "active_passive" 00:07:04.217 } 00:07:04.217 } 00:07:04.217 ] 00:07:04.217 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3649621 00:07:04.217 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:04.217 16:19:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:04.217 Running I/O for 10 seconds... 00:07:05.591 Latency(us) 00:07:05.591 [2024-12-06T15:20:00.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.591 Nvme0n1 : 1.00 36898.00 144.13 0.00 0.00 0.00 0.00 0.00 00:07:05.591 [2024-12-06T15:20:00.319Z] =================================================================================================================== 00:07:05.591 [2024-12-06T15:20:00.319Z] Total : 36898.00 144.13 0.00 0.00 0.00 0.00 0.00 00:07:05.591 00:07:06.158 16:20:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:06.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.416 Nvme0n1 : 2.00 36895.50 144.12 0.00 0.00 0.00 0.00 0.00 00:07:06.416 [2024-12-06T15:20:01.144Z] =================================================================================================================== 00:07:06.416 [2024-12-06T15:20:01.144Z] Total : 36895.50 144.12 0.00 0.00 0.00 0.00 0.00 00:07:06.416 00:07:06.416 true 00:07:06.416 16:20:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:06.416 16:20:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:06.674 16:20:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:06.674 16:20:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:06.674 16:20:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3649621 00:07:07.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.240 Nvme0n1 : 3.00 37152.67 145.13 0.00 0.00 0.00 0.00 0.00 00:07:07.240 [2024-12-06T15:20:01.968Z] =================================================================================================================== 00:07:07.240 [2024-12-06T15:20:01.968Z] Total : 37152.67 145.13 0.00 0.00 0.00 0.00 0.00 00:07:07.240 00:07:08.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.618 Nvme0n1 : 4.00 37344.50 145.88 0.00 0.00 0.00 0.00 0.00 00:07:08.618 [2024-12-06T15:20:03.346Z] 
=================================================================================================================== 00:07:08.618 [2024-12-06T15:20:03.346Z] Total : 37344.50 145.88 0.00 0.00 0.00 0.00 0.00 00:07:08.618 00:07:09.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.553 Nvme0n1 : 5.00 37459.60 146.33 0.00 0.00 0.00 0.00 0.00 00:07:09.553 [2024-12-06T15:20:04.281Z] =================================================================================================================== 00:07:09.553 [2024-12-06T15:20:04.281Z] Total : 37459.60 146.33 0.00 0.00 0.00 0.00 0.00 00:07:09.553 00:07:10.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.490 Nvme0n1 : 6.00 37551.67 146.69 0.00 0.00 0.00 0.00 0.00 00:07:10.490 [2024-12-06T15:20:05.218Z] =================================================================================================================== 00:07:10.490 [2024-12-06T15:20:05.218Z] Total : 37551.67 146.69 0.00 0.00 0.00 0.00 0.00 00:07:10.490 00:07:11.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.425 Nvme0n1 : 7.00 37609.29 146.91 0.00 0.00 0.00 0.00 0.00 00:07:11.425 [2024-12-06T15:20:06.153Z] =================================================================================================================== 00:07:11.425 [2024-12-06T15:20:06.153Z] Total : 37609.29 146.91 0.00 0.00 0.00 0.00 0.00 00:07:11.425 00:07:12.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.360 Nvme0n1 : 8.00 37656.00 147.09 0.00 0.00 0.00 0.00 0.00 00:07:12.360 [2024-12-06T15:20:07.088Z] =================================================================================================================== 00:07:12.360 [2024-12-06T15:20:07.088Z] Total : 37656.00 147.09 0.00 0.00 0.00 0.00 0.00 00:07:12.360 00:07:13.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.296 Nvme0n1 : 9.00 37696.22 147.25 0.00 0.00 0.00 0.00 0.00 00:07:13.296 [2024-12-06T15:20:08.024Z] =================================================================================================================== 00:07:13.296 [2024-12-06T15:20:08.024Z] Total : 37696.22 147.25 0.00 0.00 0.00 0.00 0.00 00:07:13.296 00:07:14.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.234 Nvme0n1 : 10.00 37724.40 147.36 0.00 0.00 0.00 0.00 0.00 00:07:14.234 [2024-12-06T15:20:08.962Z] =================================================================================================================== 00:07:14.234 [2024-12-06T15:20:08.962Z] Total : 37724.40 147.36 0.00 0.00 0.00 0.00 0.00 00:07:14.234 00:07:14.234 00:07:14.234 Latency(us) 00:07:14.234 [2024-12-06T15:20:08.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.234 Nvme0n1 : 10.00 37725.52 147.37 0.00 0.00 3390.25 2402.99 12087.75 00:07:14.234 [2024-12-06T15:20:08.962Z] =================================================================================================================== 00:07:14.234 [2024-12-06T15:20:08.962Z] Total : 37725.52 147.37 0.00 0.00 3390.25 2402.99 12087.75 00:07:14.234 { 00:07:14.234 "results": [ 00:07:14.234 { 00:07:14.234 "job": "Nvme0n1", 00:07:14.234 "core_mask": "0x2", 00:07:14.234 "workload": "randwrite", 00:07:14.234 "status": "finished", 00:07:14.234 "queue_depth": 128, 00:07:14.234 "io_size": 4096, 
00:07:14.234 "runtime": 10.003096, 00:07:14.234 "iops": 37725.520178952596, 00:07:14.234 "mibps": 147.36531319903358, 00:07:14.234 "io_failed": 0, 00:07:14.234 "io_timeout": 0, 00:07:14.234 "avg_latency_us": 3390.2528006356633, 00:07:14.234 "min_latency_us": 2402.9866666666667, 00:07:14.234 "max_latency_us": 12087.75111111111 00:07:14.234 } 00:07:14.234 ], 00:07:14.234 "core_count": 1 00:07:14.234 } 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3649614 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3649614 ']' 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3649614 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.494 16:20:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3649614 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3649614' 00:07:14.494 killing process with pid 3649614 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3649614 00:07:14.494 Received shutdown signal, test time was about 10.000000 seconds 00:07:14.494 00:07:14.494 Latency(us) 00:07:14.494 [2024-12-06T15:20:09.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.494 [2024-12-06T15:20:09.222Z] =================================================================================================================== 00:07:14.494 [2024-12-06T15:20:09.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3649614 00:07:14.494 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:14.753 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.012 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:15.012 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:15.271 
16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3646190 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3646190 00:07:15.271 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3646190 Killed "${NVMF_APP[@]}" "$@" 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3651733 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3651733 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3651733 ']' 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.271 16:20:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.271 [2024-12-06 16:20:09.828538] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:15.271 [2024-12-06 16:20:09.828587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.271 [2024-12-06 16:20:09.887295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.271 [2024-12-06 16:20:09.924553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.272 [2024-12-06 16:20:09.924591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.272 [2024-12-06 16:20:09.924597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.272 [2024-12-06 16:20:09.924603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:15.272 [2024-12-06 16:20:09.924607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.272 [2024-12-06 16:20:09.925075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:15.530 [2024-12-06 16:20:10.210276] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:15.530 [2024-12-06 16:20:10.210368] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:15.530 [2024-12-06 16:20:10.210401] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:15.530 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 603b935e-8997-42cf-8362-090314f548f2 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=603b935e-8997-42cf-8362-090314f548f2 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.531 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:15.790 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 603b935e-8997-42cf-8362-090314f548f2 -t 2000 00:07:16.048 [ 00:07:16.048 { 00:07:16.048 "name": "603b935e-8997-42cf-8362-090314f548f2", 00:07:16.048 "aliases": [ 00:07:16.048 "lvs/lvol" 00:07:16.048 ], 00:07:16.048 "product_name": "Logical Volume", 00:07:16.048 "block_size": 4096, 00:07:16.048 "num_blocks": 38912, 00:07:16.048 "uuid": "603b935e-8997-42cf-8362-090314f548f2", 00:07:16.048 "assigned_rate_limits": { 00:07:16.048 "rw_ios_per_sec": 0, 00:07:16.048 "rw_mbytes_per_sec": 0, 
00:07:16.048 "r_mbytes_per_sec": 0, 00:07:16.048 "w_mbytes_per_sec": 0 00:07:16.048 }, 00:07:16.048 "claimed": false, 00:07:16.048 "zoned": false, 00:07:16.048 "supported_io_types": { 00:07:16.048 "read": true, 00:07:16.048 "write": true, 00:07:16.048 "unmap": true, 00:07:16.048 "flush": false, 00:07:16.048 "reset": true, 00:07:16.048 "nvme_admin": false, 00:07:16.048 "nvme_io": false, 00:07:16.048 "nvme_io_md": false, 00:07:16.048 "write_zeroes": true, 00:07:16.048 "zcopy": false, 00:07:16.048 "get_zone_info": false, 00:07:16.048 "zone_management": false, 00:07:16.048 "zone_append": false, 00:07:16.048 "compare": false, 00:07:16.048 "compare_and_write": false, 00:07:16.048 "abort": false, 00:07:16.048 "seek_hole": true, 00:07:16.048 "seek_data": true, 00:07:16.048 "copy": false, 00:07:16.048 "nvme_iov_md": false 00:07:16.048 }, 00:07:16.048 "driver_specific": { 00:07:16.048 "lvol": { 00:07:16.048 "lvol_store_uuid": "406b9005-67d7-461e-b184-9bcf2ec2553a", 00:07:16.048 "base_bdev": "aio_bdev", 00:07:16.048 "thin_provision": false, 00:07:16.048 "num_allocated_clusters": 38, 00:07:16.048 "snapshot": false, 00:07:16.048 "clone": false, 00:07:16.048 "esnap_clone": false 00:07:16.048 } 00:07:16.048 } 00:07:16.048 } 00:07:16.048 ] 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:16.048 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:16.306 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:16.306 16:20:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:16.564 [2024-12-06 16:20:11.070912] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:16.564 request: 00:07:16.564 { 00:07:16.564 "uuid": "406b9005-67d7-461e-b184-9bcf2ec2553a", 00:07:16.564 "method": "bdev_lvol_get_lvstores", 00:07:16.564 "req_id": 1 00:07:16.564 } 00:07:16.564 Got JSON-RPC error response 00:07:16.564 response: 00:07:16.564 { 00:07:16.564 "code": -19, 00:07:16.564 "message": "No such device" 00:07:16.564 } 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.564 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:16.823 aio_bdev 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 603b935e-8997-42cf-8362-090314f548f2 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=603b935e-8997-42cf-8362-090314f548f2 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:16.823 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:16.823 16:20:11 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.081 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 603b935e-8997-42cf-8362-090314f548f2 -t 2000 00:07:17.081 [ 00:07:17.081 { 00:07:17.081 "name": "603b935e-8997-42cf-8362-090314f548f2", 00:07:17.081 "aliases": [ 00:07:17.081 "lvs/lvol" 00:07:17.081 ], 00:07:17.081 "product_name": "Logical Volume", 00:07:17.081 "block_size": 4096, 00:07:17.081 "num_blocks": 38912, 00:07:17.081 "uuid": "603b935e-8997-42cf-8362-090314f548f2", 00:07:17.081 "assigned_rate_limits": { 00:07:17.081 "rw_ios_per_sec": 0, 00:07:17.081 "rw_mbytes_per_sec": 0, 00:07:17.081 "r_mbytes_per_sec": 0, 00:07:17.081 "w_mbytes_per_sec": 0 00:07:17.081 }, 00:07:17.081 "claimed": false, 00:07:17.081 "zoned": false, 00:07:17.081 "supported_io_types": { 00:07:17.081 "read": true, 00:07:17.081 "write": true, 00:07:17.081 "unmap": true, 00:07:17.081 "flush": false, 00:07:17.081 "reset": true, 00:07:17.081 "nvme_admin": false, 00:07:17.081 "nvme_io": false, 00:07:17.081 "nvme_io_md": false, 00:07:17.081 "write_zeroes": true, 00:07:17.081 "zcopy": false, 00:07:17.081 "get_zone_info": false, 00:07:17.081 "zone_management": false, 00:07:17.081 "zone_append": false, 00:07:17.081 "compare": false, 00:07:17.081 "compare_and_write": false, 00:07:17.081 "abort": false, 00:07:17.081 "seek_hole": true, 00:07:17.081 "seek_data": true, 00:07:17.081 "copy": false, 00:07:17.081 "nvme_iov_md": false 00:07:17.081 }, 00:07:17.081 "driver_specific": { 00:07:17.081 "lvol": { 00:07:17.081 "lvol_store_uuid": "406b9005-67d7-461e-b184-9bcf2ec2553a", 00:07:17.081 "base_bdev": "aio_bdev", 00:07:17.081 "thin_provision": false, 00:07:17.081 "num_allocated_clusters": 38, 00:07:17.081 "snapshot": false, 00:07:17.081 "clone": false, 00:07:17.081 "esnap_clone": false 00:07:17.081 } 00:07:17.081 } 00:07:17.081 } 00:07:17.081 ] 00:07:17.081 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:17.081 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:17.081 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:17.341 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:17.341 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:17.341 16:20:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:17.600 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:17.600 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 603b935e-8997-42cf-8362-090314f548f2 00:07:17.600 16:20:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 406b9005-67d7-461e-b184-9bcf2ec2553a 00:07:17.858 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:18.116 00:07:18.116 real 0m16.285s 00:07:18.116 user 0m43.287s 00:07:18.116 sys 0m2.660s 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.116 ************************************ 00:07:18.116 END TEST lvs_grow_dirty 00:07:18.116 ************************************ 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:18.116 nvmf_trace.0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:18.116 rmmod nvme_rdma 00:07:18.116 rmmod nvme_fabrics 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:18.116 
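The nvmfcleanup tail traced above (set +e, a {1..20} retry loop, modprobe -v -r for nvme-rdma and then nvme-fabrics, set -e) is a best-effort unload of the fabrics modules. A hedged sketch of that pattern; the loop's break and pacing logic is not visible in the trace, so the break below is an assumption, not the verbatim helper:

    # Best-effort module unload; failures are tolerated while errexit is off.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break   # assumed break
    done
    set -e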
16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3651733 ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3651733 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3651733 ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3651733 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.116 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3651733 00:07:18.375 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.375 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.375 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3651733' 00:07:18.375 killing process with pid 3651733 00:07:18.375 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3651733 00:07:18.375 16:20:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3651733 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:18.375 00:07:18.375 real 0m38.262s 00:07:18.375 user 1m3.524s 00:07:18.375 sys 0m8.386s 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.375 ************************************ 00:07:18.375 END TEST nvmf_lvs_grow 00:07:18.375 ************************************ 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.375 ************************************ 00:07:18.375 START TEST nvmf_bdev_io_wait 00:07:18.375 ************************************ 00:07:18.375 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:18.635 * Looking for test storage... 
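killprocess, traced in full just above, is the autotest helper that finally stops the nvmf target: it rejects an empty pid, confirms the process is alive and that its name (reactor_0 here) is not sudo, then kills and waits. A condensed sketch under those assumptions, keyed to the autotest_common.sh line numbers visible in the trace:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # @954: refuse an empty pid
        kill -0 "$pid" || return 0                 # @958: nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")    # @960: reactor_0 in this run
        if [ "$name" = sudo ]; then return 1; fi   # @964: never blind-kill sudo
        echo "killing process with pid $pid"       # @972
        kill "$pid"                                # @973
        wait "$pid"                                # @978
    }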
00:07:18.635 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.635 --rc genhtml_branch_coverage=1 00:07:18.635 --rc genhtml_function_coverage=1 00:07:18.635 --rc genhtml_legend=1 00:07:18.635 --rc geninfo_all_blocks=1 00:07:18.635 --rc geninfo_unexecuted_blocks=1 00:07:18.635 00:07:18.635 ' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.635 --rc genhtml_branch_coverage=1 00:07:18.635 --rc genhtml_function_coverage=1 00:07:18.635 --rc genhtml_legend=1 00:07:18.635 --rc geninfo_all_blocks=1 00:07:18.635 --rc geninfo_unexecuted_blocks=1 00:07:18.635 00:07:18.635 ' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.635 --rc genhtml_branch_coverage=1 00:07:18.635 --rc genhtml_function_coverage=1 00:07:18.635 --rc genhtml_legend=1 00:07:18.635 --rc geninfo_all_blocks=1 00:07:18.635 --rc geninfo_unexecuted_blocks=1 00:07:18.635 00:07:18.635 ' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.635 --rc genhtml_branch_coverage=1 00:07:18.635 --rc genhtml_function_coverage=1 00:07:18.635 --rc genhtml_legend=1 00:07:18.635 --rc geninfo_all_blocks=1 00:07:18.635 --rc geninfo_unexecuted_blocks=1 00:07:18.635 00:07:18.635 ' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.635 16:20:13 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.635 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.636 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.636 16:20:13 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.219 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.220 16:20:18 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:25.220 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:25.220 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:25.220 16:20:18 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:25.220 Found net devices under 0000:18:00.0: mlx_0_0 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:25.220 Found net devices under 0000:18:00.1: mlx_0_1 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.220 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:25.221 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.221 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:25.221 altname enp24s0f0np0 00:07:25.221 altname ens785f0np0 00:07:25.221 inet 192.168.100.8/24 scope global mlx_0_0 00:07:25.221 valid_lft forever preferred_lft forever 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:25.221 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.221 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:25.221 altname enp24s0f1np1 00:07:25.221 altname ens785f1np1 00:07:25.221 inet 192.168.100.9/24 scope global mlx_0_1 00:07:25.221 valid_lft forever preferred_lft forever 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:25.221 192.168.100.9' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:25.221 192.168.100.9' 00:07:25.221 
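The per-interface lookups traced at common.sh@116-117 reduce to one small pipeline. This sketch is assembled from the traced commands themselves, not quoted from the source:

# print the first IPv4 address assigned to an interface
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9

The two addresses are joined into RDMA_IP_LIST, and the head -n 1 / tail -n +2 calls that follow split the list back into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.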
16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:25.221 192.168.100.9' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3655583 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3655583 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3655583 ']' 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.221 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.222 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.222 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.222 16:20:18 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 [2024-12-06 16:20:19.038088] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:07:25.222 [2024-12-06 16:20:19.038143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.222 [2024-12-06 16:20:19.098832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.222 [2024-12-06 16:20:19.138979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.222 [2024-12-06 16:20:19.139015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.222 [2024-12-06 16:20:19.139022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.222 [2024-12-06 16:20:19.139027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.222 [2024-12-06 16:20:19.139032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.222 [2024-12-06 16:20:19.140390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.222 [2024-12-06 16:20:19.140424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.222 [2024-12-06 16:20:19.140510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.222 [2024-12-06 16:20:19.140512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 
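Condensed, the target bring-up traced above is the following sequence (a sketch assembled from the traced commands; rpc_cmd is the harness wrapper around scripts/rpc.py, and the nvmf_tgt path is shortened here):

# start the target paused so bdev options can be set before subsystem init
nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"              # blocks until /var/tmp/spdk.sock answers

rpc_cmd bdev_set_options -p 5 -c 1    # tiny bdev_io pool and cache
rpc_cmd framework_start_init          # leave the --wait-for-rpc pre-init state
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The deliberately tiny bdev_io pool is what drives I/O through the queue-and-retry path this bdev_io_wait test exists to exercise.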
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 [2024-12-06 16:20:19.310027] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d84110/0x1d88600) succeed. 00:07:25.222 [2024-12-06 16:20:19.318833] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d857a0/0x1dc9ca0) succeed. 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 Malloc0 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:25.222 [2024-12-06 16:20:19.488405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3655844 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3655847 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.222 { 00:07:25.222 "params": { 00:07:25.222 "name": "Nvme$subsystem", 00:07:25.222 "trtype": "$TEST_TRANSPORT", 00:07:25.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.222 "adrfam": "ipv4", 00:07:25.222 "trsvcid": "$NVMF_PORT", 00:07:25.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.222 "hdgst": ${hdgst:-false}, 00:07:25.222 "ddgst": ${ddgst:-false} 00:07:25.222 }, 00:07:25.222 "method": "bdev_nvme_attach_controller" 00:07:25.222 } 00:07:25.222 EOF 00:07:25.222 )") 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3655850 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.222 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.222 { 00:07:25.222 "params": { 00:07:25.222 "name": "Nvme$subsystem", 00:07:25.222 "trtype": "$TEST_TRANSPORT", 00:07:25.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.222 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "$NVMF_PORT", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.223 "hdgst": ${hdgst:-false}, 00:07:25.223 "ddgst": ${ddgst:-false} 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 } 00:07:25.223 EOF 00:07:25.223 )") 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3655854 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.223 { 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme$subsystem", 00:07:25.223 "trtype": "$TEST_TRANSPORT", 00:07:25.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "$NVMF_PORT", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.223 "hdgst": ${hdgst:-false}, 00:07:25.223 "ddgst": ${ddgst:-false} 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 } 00:07:25.223 EOF 00:07:25.223 )") 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:25.223 { 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme$subsystem", 00:07:25.223 "trtype": "$TEST_TRANSPORT", 00:07:25.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "$NVMF_PORT", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:25.223 "hdgst": ${hdgst:-false}, 00:07:25.223 "ddgst": ${ddgst:-false} 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 } 00:07:25.223 EOF 00:07:25.223 )") 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3655844 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme1", 00:07:25.223 "trtype": "rdma", 00:07:25.223 "traddr": "192.168.100.8", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "4420", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.223 "hdgst": false, 00:07:25.223 "ddgst": false 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 }' 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
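Each bdevperf instance receives its bdev configuration through --json /dev/fd/63, i.e. a process substitution of gen_nvmf_target_json. Reconstructed from the traced pieces (the config=() array, the heredoc fragment, IFS=, and jq .), the generator looks roughly like this; it is a sketch, not the verbatim nvmf/common.sh source:

# values observed in this run (normally exported by the harness)
TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .   # one subsystem -> one valid JSON object
}

# bdevperf sees the generator's output as /dev/fd/63:
# bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256

With the default single subsystem this produces exactly the bdev_nvme_attach_controller document printed below, pointing every job at nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420.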
00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme1", 00:07:25.223 "trtype": "rdma", 00:07:25.223 "traddr": "192.168.100.8", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "4420", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.223 "hdgst": false, 00:07:25.223 "ddgst": false 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 }' 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme1", 00:07:25.223 "trtype": "rdma", 00:07:25.223 "traddr": "192.168.100.8", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "4420", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.223 "hdgst": false, 00:07:25.223 "ddgst": false 00:07:25.223 }, 00:07:25.223 "method": "bdev_nvme_attach_controller" 00:07:25.223 }' 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:25.223 16:20:19 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:25.223 "params": { 00:07:25.223 "name": "Nvme1", 00:07:25.223 "trtype": "rdma", 00:07:25.223 "traddr": "192.168.100.8", 00:07:25.223 "adrfam": "ipv4", 00:07:25.223 "trsvcid": "4420", 00:07:25.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:25.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:25.223 "hdgst": false, 00:07:25.223 "ddgst": false 00:07:25.223 }, 00:07:25.224 "method": "bdev_nvme_attach_controller" 00:07:25.224 }' 00:07:25.224 [2024-12-06 16:20:19.536741] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:25.224 [2024-12-06 16:20:19.536787] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:25.224 [2024-12-06 16:20:19.537930] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:25.224 [2024-12-06 16:20:19.537967] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:25.224 [2024-12-06 16:20:19.540173] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:25.224 [2024-12-06 16:20:19.540215] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:25.224 [2024-12-06 16:20:19.541996] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:07:25.224 [2024-12-06 16:20:19.542034] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:25.224 [2024-12-06 16:20:19.721005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.224 [2024-12-06 16:20:19.761079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.224 [2024-12-06 16:20:19.808765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.224 [2024-12-06 16:20:19.860193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.224 [2024-12-06 16:20:19.869112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:25.224 [2024-12-06 16:20:19.900881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:25.224 [2024-12-06 16:20:19.913357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.483 [2024-12-06 16:20:19.953628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:25.483 Running I/O for 1 seconds... 00:07:25.483 Running I/O for 1 seconds... 00:07:25.483 Running I/O for 1 seconds... 00:07:25.483 Running I/O for 1 seconds... 00:07:26.423 18630.00 IOPS, 72.77 MiB/s [2024-12-06T15:20:21.151Z] 17957.00 IOPS, 70.14 MiB/s 00:07:26.423 Latency(us) 00:07:26.423 [2024-12-06T15:20:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.423 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:26.423 Nvme1n1 : 1.01 18665.71 72.91 0.00 0.00 6836.96 4369.07 16117.00 00:07:26.423 [2024-12-06T15:20:21.151Z] =================================================================================================================== 00:07:26.423 [2024-12-06T15:20:21.151Z] Total : 18665.71 72.91 0.00 0.00 6836.96 4369.07 16117.00 00:07:26.423 00:07:26.423 Latency(us) 00:07:26.423 [2024-12-06T15:20:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.423 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:26.423 Nvme1n1 : 1.01 17998.77 70.31 0.00 0.00 7090.82 4344.79 16602.45 00:07:26.423 [2024-12-06T15:20:21.151Z] =================================================================================================================== 00:07:26.423 [2024-12-06T15:20:21.151Z] Total : 17998.77 70.31 0.00 0.00 7090.82 4344.79 16602.45 00:07:26.423 15250.00 IOPS, 59.57 MiB/s 00:07:26.423 Latency(us) 00:07:26.423 [2024-12-06T15:20:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.423 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:26.423 Nvme1n1 : 1.01 15342.60 59.93 0.00 0.00 8323.16 3106.89 18252.99 00:07:26.423 [2024-12-06T15:20:21.151Z] =================================================================================================================== 00:07:26.423 [2024-12-06T15:20:21.151Z] Total : 15342.60 59.93 0.00 0.00 8323.16 3106.89 18252.99 00:07:26.423 268488.00 IOPS, 1048.78 MiB/s 00:07:26.423 Latency(us) 00:07:26.423 [2024-12-06T15:20:21.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.423 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:26.423 Nvme1n1 : 1.00 268115.01 1047.32 0.00 0.00 474.90 201.77 2002.49 00:07:26.423 [2024-12-06T15:20:21.151Z] 
=================================================================================================================== 00:07:26.423 [2024-12-06T15:20:21.151Z] Total : 268115.01 1047.32 0.00 0.00 474.90 201.77 2002.49 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3655847 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3655850 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3655854 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:26.683 rmmod nvme_rdma 00:07:26.683 rmmod nvme_fabrics 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3655583 ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3655583 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3655583 ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3655583 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3655583 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3655583' 00:07:26.683 killing process with pid 3655583 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3655583 00:07:26.683 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3655583 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:26.943 00:07:26.943 real 0m8.495s 00:07:26.943 user 0m16.539s 00:07:26.943 sys 0m5.475s 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.943 ************************************ 00:07:26.943 END TEST nvmf_bdev_io_wait 00:07:26.943 ************************************ 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.943 ************************************ 00:07:26.943 START TEST nvmf_queue_depth 00:07:26.943 ************************************ 00:07:26.943 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:27.203 * Looking for test storage... 
00:07:27.203 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.203 --rc genhtml_branch_coverage=1 00:07:27.203 --rc genhtml_function_coverage=1 00:07:27.203 --rc genhtml_legend=1 00:07:27.203 --rc geninfo_all_blocks=1 00:07:27.203 --rc geninfo_unexecuted_blocks=1 00:07:27.203 00:07:27.203 ' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.203 --rc genhtml_branch_coverage=1 00:07:27.203 --rc genhtml_function_coverage=1 00:07:27.203 --rc genhtml_legend=1 00:07:27.203 --rc geninfo_all_blocks=1 00:07:27.203 --rc geninfo_unexecuted_blocks=1 00:07:27.203 00:07:27.203 ' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.203 --rc genhtml_branch_coverage=1 00:07:27.203 --rc genhtml_function_coverage=1 00:07:27.203 --rc genhtml_legend=1 00:07:27.203 --rc geninfo_all_blocks=1 00:07:27.203 --rc geninfo_unexecuted_blocks=1 00:07:27.203 00:07:27.203 ' 00:07:27.203 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.204 --rc genhtml_branch_coverage=1 00:07:27.204 --rc genhtml_function_coverage=1 00:07:27.204 --rc genhtml_legend=1 00:07:27.204 --rc geninfo_all_blocks=1 00:07:27.204 --rc geninfo_unexecuted_blocks=1 00:07:27.204 00:07:27.204 ' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.204 16:20:21 
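The version gate traced here (lt 1.15 2, from scripts/common.sh) splits each version string on ".", "-" and ":" and compares the fields numerically. A condensed sketch of the traced logic; the real cmp_versions also normalizes each field through a decimal helper, omitted here:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}

lt 1.15 2 && echo "lcov is pre-2.0"   # 1 < 2 in the first field, so lt succeeds

Because the installed lcov reports a 1.x version, the pre-2.0 option spellings (--rc lcov_branch_coverage=1 and friends) are exported in LCOV_OPTS, as the trace shows.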
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.204 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.204 16:20:21 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:32.482 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:32.482 Found 0000:18:00.1 (0x15b3 - 0x1015) 
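This discovery pass is the same one that ran for the previous test: nvmf/common.sh gathers the PCI addresses of every supported Intel/Mellanox NIC into pci_devs, then maps each PCI function to its kernel interface through sysfs. A sketch condensed from the traced loop at common.sh@410-429:

pci_devs=(0000:18:00.0 0000:18:00.1)   # as resolved via pci_bus_cache in the trace
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

On this host that yields mlx_0_0 and mlx_0_1, the two mlx5 ports (0x15b3:0x1015) that the rest of the test reaches as 192.168.100.8 and 192.168.100.9.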
00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:32.482 Found net devices under 0000:18:00.0: mlx_0_0 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:32.482 Found net devices under 0000:18:00.1: mlx_0_1 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:32.482 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:32.482 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.482 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:32.482 altname enp24s0f0np0 00:07:32.482 altname ens785f0np0 00:07:32.482 inet 192.168.100.8/24 scope global mlx_0_0 00:07:32.482 valid_lft forever preferred_lft forever 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.483 16:20:26 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:32.483 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:32.483 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:32.483 altname enp24s0f1np1 00:07:32.483 altname ens785f1np1 00:07:32.483 inet 192.168.100.9/24 scope global mlx_0_1 00:07:32.483 valid_lft forever preferred_lft forever 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:32.483 16:20:27 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:32.483 192.168.100.9' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:32.483 192.168.100.9' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:32.483 192.168.100.9' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3659407 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3659407 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3659407 ']' 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.483 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.483 [2024-12-06 16:20:27.142687] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:07:32.483 [2024-12-06 16:20:27.142728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.483 [2024-12-06 16:20:27.204748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.743 [2024-12-06 16:20:27.242339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.743 [2024-12-06 16:20:27.242373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.743 [2024-12-06 16:20:27.242383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.743 [2024-12-06 16:20:27.242389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.743 [2024-12-06 16:20:27.242394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.743 [2024-12-06 16:20:27.242861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.743 [2024-12-06 16:20:27.391812] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17230c0/0x17275b0) succeed. 00:07:32.743 [2024-12-06 16:20:27.399507] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1724570/0x1768c50) succeed. 
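Both create_ib_device notices above come from the transport creation: nvmfappstart launched the target (PID 3659407), and rpc_cmd, SPDK's thin wrapper around scripts/rpc.py on /var/tmp/spdk.sock, then created the RDMA transport. A minimal sketch of the same two steps as direct commands, using the binaries and arguments recorded in this trace:
  # start the NVMe-oF target pinned to core 1 (-m 0x2), all tracepoint groups enabled
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # once the RPC socket is listening, create the RDMA transport
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192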
00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.743 Malloc0 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.743 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.002 [2024-12-06 16:20:27.475385] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3659427 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3659427 /var/tmp/bdevperf.sock 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3659427 ']' 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
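queue_depth.sh@24-27 above provision the target with four RPCs before bdevperf is brought up on its own socket. Spelled out directly (rpc.py stands in for the full scripts/rpc.py path that rpc_cmd invokes):
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem that allows any host (-a), serial SPDK00000000000001
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # RDMA listener on the first target IP discovered earlier
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420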
00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:33.002 [2024-12-06 16:20:27.523470] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:33.002 [2024-12-06 16:20:27.523508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659427 ] 00:07:33.002 [2024-12-06 16:20:27.579945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.002 [2024-12-06 16:20:27.617219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.002 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.261 NVMe0n1 00:07:33.261 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.261 16:20:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.261 Running I/O for 10 seconds... 
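Because bdevperf was started with -z (wait for RPC) on /var/tmp/bdevperf.sock, the driver performs the two explicit steps traced above before I/O starts: attach the exported subsystem (yielding bdev NVMe0n1), then trigger the configured job. As plain commands, with rpc.py again abbreviating the workspace's scripts/rpc.py:
  # attach the NVMe-oF controller over RDMA inside the bdevperf process
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # run the job configured on the command line (-q 1024 -o 4096 -w verify -t 10)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests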
00:07:35.572 18181.00 IOPS, 71.02 MiB/s
[2024-12-06T15:20:31.234Z] 18432.00 IOPS, 72.00 MiB/s
[2024-12-06T15:20:32.169Z] 18432.00 IOPS, 72.00 MiB/s
[2024-12-06T15:20:33.104Z] 18550.25 IOPS, 72.46 MiB/s
[2024-12-06T15:20:34.040Z] 18636.80 IOPS, 72.80 MiB/s
[2024-12-06T15:20:34.978Z] 18612.50 IOPS, 72.71 MiB/s
[2024-12-06T15:20:35.920Z] 18680.29 IOPS, 72.97 MiB/s
[2024-12-06T15:20:37.299Z] 18688.00 IOPS, 73.00 MiB/s
[2024-12-06T15:20:37.962Z] 18693.11 IOPS, 73.02 MiB/s
[2024-12-06T15:20:37.962Z] 18722.30 IOPS, 73.13 MiB/s
00:07:43.234 Latency(us)
00:07:43.234 [2024-12-06T15:20:37.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:43.234 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:07:43.234 Verification LBA range: start 0x0 length 0x4000
00:07:43.234 NVMe0n1 : 10.04 18740.06 73.20 0.00 0.00 54494.97 17379.18 34564.17
00:07:43.234 [2024-12-06T15:20:37.962Z] ===================================================================================================================
00:07:43.234 [2024-12-06T15:20:37.962Z] Total : 18740.06 73.20 0.00 0.00 54494.97 17379.18 34564.17
00:07:43.234 {
00:07:43.234 "results": [
00:07:43.234 {
00:07:43.234 "job": "NVMe0n1",
00:07:43.234 "core_mask": "0x1",
00:07:43.234 "workload": "verify",
00:07:43.234 "status": "finished",
00:07:43.234 "verify_range": {
00:07:43.234 "start": 0,
00:07:43.234 "length": 16384
00:07:43.234 },
00:07:43.234 "queue_depth": 1024,
00:07:43.234 "io_size": 4096,
00:07:43.234 "runtime": 10.043619,
00:07:43.234 "iops": 18740.057742134584,
00:07:43.234 "mibps": 73.20335055521322,
00:07:43.234 "io_failed": 0,
00:07:43.234 "io_timeout": 0,
00:07:43.234 "avg_latency_us": 54494.96920612544,
00:07:43.234 "min_latency_us": 17379.176296296297,
00:07:43.234 "max_latency_us": 34564.171851851854
00:07:43.234 }
00:07:43.234 ],
00:07:43.234 "core_count": 1
00:07:43.234 }
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3659427
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3659427 ']'
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3659427
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.595 16:20:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3659427
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3659427'
00:07:43.595 killing process with pid 3659427
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3659427
00:07:43.595 Received shutdown signal, test time was about 10.000000 seconds
00:07:43.595
00:07:43.595 Latency(us)
00:07:43.595 [2024-12-06T15:20:38.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:43.595 [2024-12-06T15:20:38.323Z] ===================================================================================================================
00:07:43.595 [2024-12-06T15:20:38.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3659427
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:43.595 rmmod nvme_rdma
00:07:43.595 rmmod nvme_fabrics
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3659407 ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3659407
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3659407 ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3659407
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:43.595 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3659407
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3659407'
00:07:43.854 killing process with pid 3659407
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3659407
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3659407
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:07:43.854
00:07:43.854 real 0m16.871s
00:07:43.854 user 0m23.556s
00:07:43.854 sys 0m4.579s
00:07:43.854
16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:43.854 ************************************ 00:07:43.854 END TEST nvmf_queue_depth 00:07:43.854 ************************************ 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:43.854 ************************************ 00:07:43.854 START TEST nvmf_target_multipath 00:07:43.854 ************************************ 00:07:43.854 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:07:44.112 * Looking for test storage... 00:07:44.112 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.112 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.113 --rc genhtml_branch_coverage=1 00:07:44.113 --rc genhtml_function_coverage=1 00:07:44.113 --rc genhtml_legend=1 00:07:44.113 --rc geninfo_all_blocks=1 00:07:44.113 --rc geninfo_unexecuted_blocks=1 00:07:44.113 00:07:44.113 ' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.113 --rc genhtml_branch_coverage=1 00:07:44.113 --rc genhtml_function_coverage=1 00:07:44.113 --rc genhtml_legend=1 00:07:44.113 --rc geninfo_all_blocks=1 00:07:44.113 --rc geninfo_unexecuted_blocks=1 00:07:44.113 00:07:44.113 ' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.113 --rc genhtml_branch_coverage=1 00:07:44.113 --rc genhtml_function_coverage=1 00:07:44.113 --rc genhtml_legend=1 00:07:44.113 --rc geninfo_all_blocks=1 00:07:44.113 --rc geninfo_unexecuted_blocks=1 00:07:44.113 00:07:44.113 ' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.113 --rc genhtml_branch_coverage=1 00:07:44.113 --rc genhtml_function_coverage=1 00:07:44.113 --rc genhtml_legend=1 00:07:44.113 --rc geninfo_all_blocks=1 00:07:44.113 --rc geninfo_unexecuted_blocks=1 00:07:44.113 00:07:44.113 ' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.113 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.114 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.114 16:20:38 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:50.680 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.680 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.680 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.680 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.680 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:50.681 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:50.681 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:50.681 Found net devices under 0000:18:00.0: mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:50.681 
16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:50.681 Found net devices under 0000:18:00.1: mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
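The multipath test repeats the environment bring-up seen earlier: load_ib_rdma_modules above is a fixed modprobe sequence, and get_ip_address (exercised on the following lines) reduces to a single pipeline. A stand-alone sketch of both, with module and interface names taken straight from the trace:
  # kernel modules loaded before the RDMA transport can bind the mlx5 ports
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  # IPv4 address of an RDMA netdev, exactly as nvmf/common.sh extracts it
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8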
00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:50.681 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:50.681 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:50.681 altname enp24s0f0np0 00:07:50.681 altname ens785f0np0 00:07:50.681 inet 192.168.100.8/24 scope global mlx_0_0 00:07:50.681 valid_lft forever preferred_lft forever 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:50.681 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:50.681 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:50.681 altname enp24s0f1np1 00:07:50.681 altname ens785f1np1 00:07:50.681 inet 192.168.100.9/24 scope global mlx_0_1 00:07:50.681 valid_lft forever preferred_lft forever 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:50.681 192.168.100.9' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:50.681 192.168.100.9' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:50.681 192.168.100.9' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:07:50.681 run this test only with TCP transport for now 00:07:50.681 16:20:44 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:50.681 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:50.682 rmmod nvme_rdma 00:07:50.682 rmmod nvme_fabrics 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:50.682 00:07:50.682 real 0m5.844s 00:07:50.682 user 0m1.682s 00:07:50.682 sys 0m4.284s 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
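nvmftestfini's module teardown deliberately drops 'set -e' around the unload loop, since modprobe -r nvme-rdma can fail while references are still draining; the bare 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' lines interleaved above are that loop's own output. A sketch of the idiom, with the retry bound taken from the {1..20} range in the trace and the sleep added as an assumption:

    #!/usr/bin/env bash
    cleanup_nvme_modules() {
        sync
        set +e                          # unload may fail while refs remain
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && break
            sleep 1                     # hypothetical backoff between attempts
        done
        modprobe -v -r nvme-fabrics
        set -e
        return 0
    }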
00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:50.682 ************************************ 00:07:50.682 END TEST nvmf_target_multipath 00:07:50.682 ************************************ 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.682 ************************************ 00:07:50.682 START TEST nvmf_zcopy 00:07:50.682 ************************************ 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:07:50.682 * Looking for test storage... 00:07:50.682 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.682 --rc genhtml_branch_coverage=1 00:07:50.682 --rc genhtml_function_coverage=1 00:07:50.682 --rc genhtml_legend=1 00:07:50.682 --rc geninfo_all_blocks=1 00:07:50.682 --rc geninfo_unexecuted_blocks=1 00:07:50.682 00:07:50.682 ' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.682 --rc genhtml_branch_coverage=1 00:07:50.682 --rc genhtml_function_coverage=1 00:07:50.682 --rc genhtml_legend=1 00:07:50.682 --rc geninfo_all_blocks=1 00:07:50.682 --rc geninfo_unexecuted_blocks=1 00:07:50.682 00:07:50.682 ' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.682 --rc genhtml_branch_coverage=1 00:07:50.682 --rc genhtml_function_coverage=1 00:07:50.682 --rc genhtml_legend=1 00:07:50.682 --rc geninfo_all_blocks=1 00:07:50.682 --rc geninfo_unexecuted_blocks=1 00:07:50.682 00:07:50.682 ' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.682 --rc genhtml_branch_coverage=1 00:07:50.682 --rc genhtml_function_coverage=1 00:07:50.682 --rc genhtml_legend=1 00:07:50.682 --rc geninfo_all_blocks=1 00:07:50.682 --rc geninfo_unexecuted_blocks=1 00:07:50.682 00:07:50.682 ' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.682 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.682 16:20:44 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:55.973 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:55.973 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:55.973 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
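The odd-looking comparisons such as [[ 0x1015 == \0\x\1\0\1\7 ]] are an xtrace artifact, not garbled output: the right-hand side of [[ == ]] is a glob pattern, so bash's trace backslash-escapes every character to record that it was matched literally. A small reproduction (the ConnectX model names are the usual readings of these Mellanox device IDs, noted here as an assumption):

    #!/usr/bin/env bash
    set -x
    dev_id=0x1015
    if [[ $dev_id == 0x1017 ]]; then     # traced as: [[ 0x1015 == \0\x\1\0\1\7 ]]
        echo "ConnectX-5"
    elif [[ $dev_id == 0x1015 ]]; then   # this branch matches, as in the log
        echo "ConnectX-4 Lx"
    fi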
00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:55.974 Found net devices under 0000:18:00.0: mlx_0_0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:55.974 Found net devices under 0000:18:00.1: mlx_0_1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:55.974 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.974 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:55.974 altname enp24s0f0np0 00:07:55.974 altname ens785f0np0 00:07:55.974 inet 192.168.100.8/24 scope global mlx_0_0 
00:07:55.974 valid_lft forever preferred_lft forever 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:55.974 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.974 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:55.974 altname enp24s0f1np1 00:07:55.974 altname ens785f1np1 00:07:55.974 inet 192.168.100.9/24 scope global mlx_0_1 00:07:55.974 valid_lft forever preferred_lft forever 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:55.974 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.975 16:20:50 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:55.975 192.168.100.9' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:55.975 192.168.100.9' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:55.975 192.168.100.9' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3668038 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3668038 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3668038 ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:55.975 [2024-12-06 16:20:50.377026] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:07:55.975 [2024-12-06 16:20:50.377073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.975 [2024-12-06 16:20:50.434921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.975 [2024-12-06 16:20:50.472705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.975 [2024-12-06 16:20:50.472735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.975 [2024-12-06 16:20:50.472742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.975 [2024-12-06 16:20:50.472747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.975 [2024-12-06 16:20:50.472752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
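waitforlisten above parks the test until the freshly launched nvmf_tgt (pid 3668038) answers on /var/tmp/spdk.sock, giving up after max_retries=100. A minimal re-creation of that readiness loop, assuming a bare socket-exists probe where the real helper also issues an RPC:

    #!/usr/bin/env bash
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket is up; assume listening
            sleep 0.1
        done
        return 1
    }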
00:07:55.975 [2024-12-06 16:20:50.473224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:07:55.975 Unsupported transport: rdma 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:55.975 nvmf_trace.0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:55.975 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:55.976 rmmod nvme_rdma 00:07:55.976 rmmod nvme_fabrics 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
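zcopy bails out the same way multipath did, but its EXIT trap first runs process_shm, which tars up the tracepoint file the target left in /dev/shm (nvmf_trace.0 above) so even a skipped run leaves a snapshot for offline analysis. A compact sketch of guard plus harvest; the output directory is shortened here and the shm glob is an assumption:

    #!/usr/bin/env bash
    if [[ $TEST_TRANSPORT != tcp ]]; then
        echo "Unsupported transport: $TEST_TRANSPORT"
        for f in /dev/shm/*_trace.0; do            # e.g. nvmf_trace.0
            [[ -e $f ]] && tar -C /dev/shm -czf "./${f##*/}_shm.tar.gz" "${f##*/}"
        done
        exit 0
    fi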
00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3668038 ']' 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3668038 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3668038 ']' 00:07:55.976 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3668038 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3668038 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3668038' 00:07:56.234 killing process with pid 3668038 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3668038 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3668038 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:56.234 00:07:56.234 real 0m6.416s 00:07:56.234 user 0m2.394s 00:07:56.234 sys 0m4.540s 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.234 ************************************ 00:07:56.234 END TEST nvmf_zcopy 00:07:56.234 ************************************ 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.234 16:20:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 ************************************ 00:07:56.493 START TEST nvmf_nmic 00:07:56.493 ************************************ 00:07:56.493 16:20:50 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:07:56.493 * Looking for test storage... 
00:07:56.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.493 --rc genhtml_branch_coverage=1 00:07:56.493 --rc genhtml_function_coverage=1 00:07:56.493 --rc genhtml_legend=1 00:07:56.493 --rc geninfo_all_blocks=1 00:07:56.493 --rc geninfo_unexecuted_blocks=1 00:07:56.493 00:07:56.493 ' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.493 --rc genhtml_branch_coverage=1 00:07:56.493 --rc genhtml_function_coverage=1 00:07:56.493 --rc genhtml_legend=1 00:07:56.493 --rc geninfo_all_blocks=1 00:07:56.493 --rc geninfo_unexecuted_blocks=1 00:07:56.493 00:07:56.493 ' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.493 --rc genhtml_branch_coverage=1 00:07:56.493 --rc genhtml_function_coverage=1 00:07:56.493 --rc genhtml_legend=1 00:07:56.493 --rc geninfo_all_blocks=1 00:07:56.493 --rc geninfo_unexecuted_blocks=1 00:07:56.493 00:07:56.493 ' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.493 --rc genhtml_branch_coverage=1 00:07:56.493 --rc genhtml_function_coverage=1 00:07:56.493 --rc genhtml_legend=1 00:07:56.493 --rc geninfo_all_blocks=1 00:07:56.493 --rc geninfo_unexecuted_blocks=1 00:07:56.493 00:07:56.493 ' 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:56.493 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.494 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
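In the common.sh sourcing traced above, the host identity is generated once with `nvme gen-hostnqn` and reused for every fabric login later in the run. A hedged sketch of that setup; the uuid-suffix derivation of NVME_HOSTID is an assumption about how common.sh computes it, though the resulting values match this run:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed derivation: keep the bare <uuid>
  # later reused for each fabric login, e.g.:
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
      -a 192.168.100.8 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"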
00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.494 16:20:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.055 16:20:56 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:03.055 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:03.055 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.055 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:03.056 Found net devices under 0000:18:00.0: mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:03.056 Found net devices under 0000:18:00.1: mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
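The device enumeration traced above walks the supported Mellanox PCI ids and resolves each function to its kernel net interface through sysfs. A rough sketch of that lookup, using the two functions found in this run (the harness builds the PCI list dynamically rather than hard-coding it):

  # resolve PCI functions to net interface names via sysfs, as traced above
  for pci in 0000:18:00.0 0000:18:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
      done
  done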
00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:03.056 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.056 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:03.056 altname enp24s0f0np0 00:08:03.056 altname ens785f0np0 
00:08:03.056 inet 192.168.100.8/24 scope global mlx_0_0 00:08:03.056 valid_lft forever preferred_lft forever 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:03.056 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.056 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:03.056 altname enp24s0f1np1 00:08:03.056 altname ens785f1np1 00:08:03.056 inet 192.168.100.9/24 scope global mlx_0_1 00:08:03.056 valid_lft forever preferred_lft forever 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.056 
16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:03.056 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:03.057 192.168.100.9' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:03.057 192.168.100.9' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:03.057 192.168.100.9' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3671316 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3671316 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3671316 ']' 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.057 [2024-12-06 16:20:56.761741] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:08:03.057 [2024-12-06 16:20:56.761788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.057 [2024-12-06 16:20:56.820611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.057 [2024-12-06 16:20:56.862039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.057 [2024-12-06 16:20:56.862075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.057 [2024-12-06 16:20:56.862081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.057 [2024-12-06 16:20:56.862087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.057 [2024-12-06 16:20:56.862092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
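nvmfappstart above launches nvmf_tgt in the background and waitforlisten blocks until the target's RPC socket answers, so that the rpc_cmd calls that follow do not race the startup. A hedged sketch of that pattern, using the binary path, flags, and socket shown in this run; the polling detail is an assumption about the helper, which also bails out if the pid disappears:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the target is ready
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done
  # target is now ready for nvmf_create_transport / nvmf_create_subsystem RPCs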
00:08:03.057 [2024-12-06 16:20:56.863478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.057 [2024-12-06 16:20:56.863585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.057 [2024-12-06 16:20:56.863659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.057 [2024-12-06 16:20:56.863660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:56 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 [2024-12-06 16:20:57.019424] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdce0c0/0xdd25b0) succeed. 00:08:03.057 [2024-12-06 16:20:57.027652] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdcf750/0xe13c50) succeed. 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 Malloc0 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:03.057 16:20:57 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 [2024-12-06 16:20:57.201543] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:03.057 test case1: single bdev can't be used in multiple subsystems 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.057 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.058 [2024-12-06 16:20:57.225325] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:03.058 [2024-12-06 16:20:57.225343] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:03.058 [2024-12-06 16:20:57.225349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:03.058 request: 00:08:03.058 { 00:08:03.058 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:03.058 "namespace": { 00:08:03.058 "bdev_name": "Malloc0", 00:08:03.058 "no_auto_visible": false, 00:08:03.058 "hide_metadata": false 00:08:03.058 }, 00:08:03.058 "method": "nvmf_subsystem_add_ns", 00:08:03.058 "req_id": 1 00:08:03.058 } 00:08:03.058 Got JSON-RPC error response 00:08:03.058 response: 00:08:03.058 { 00:08:03.058 "code": -32602, 00:08:03.058 "message": "Invalid parameters" 00:08:03.058 } 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:08:03.058 Adding namespace failed - expected result. 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:03.058 test case2: host connect to nvmf target in multiple paths 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:03.058 [2024-12-06 16:20:57.237361] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.058 16:20:57 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:03.626 16:20:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:08:04.562 16:20:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.562 16:20:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:04.562 16:20:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.562 16:20:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:04.562 16:20:59 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:07.093 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:07.093 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:07.093 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.094 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:07.094 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.094 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:07.094 16:21:01 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:07.094 [global] 00:08:07.094 thread=1 00:08:07.094 invalidate=1 00:08:07.094 rw=write 00:08:07.094 time_based=1 00:08:07.094 runtime=1 00:08:07.094 ioengine=libaio 00:08:07.094 direct=1 00:08:07.094 bs=4096 00:08:07.094 iodepth=1 00:08:07.094 norandommap=0 00:08:07.094 numjobs=1 00:08:07.094 00:08:07.094 verify_dump=1 00:08:07.094 verify_backlog=512 00:08:07.094 verify_state_save=0 00:08:07.094 do_verify=1 00:08:07.094 verify=crc32c-intel 00:08:07.094 [job0] 00:08:07.094 filename=/dev/nvme0n1 00:08:07.094 Could not set queue depth 
(nvme0n1) 00:08:07.094 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:07.094 fio-3.35 00:08:07.094 Starting 1 thread 00:08:08.031 00:08:08.031 job0: (groupid=0, jobs=1): err= 0: pid=3672601: Fri Dec 6 16:21:02 2024 00:08:08.031 read: IOPS=7168, BW=28.0MiB/s (29.4MB/s)(28.0MiB/1001msec) 00:08:08.031 slat (nsec): min=6022, max=34339, avg=7166.58, stdev=802.37 00:08:08.031 clat (usec): min=40, max=338, avg=58.49, stdev=14.56 00:08:08.031 lat (usec): min=54, max=346, avg=65.65, stdev=14.59 00:08:08.031 clat percentiles (usec): 00:08:08.031 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:08:08.031 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:08:08.031 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64], 00:08:08.031 | 99.00th=[ 133], 99.50th=[ 184], 99.90th=[ 249], 99.95th=[ 289], 00:08:08.031 | 99.99th=[ 338] 00:08:08.031 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:08:08.031 slat (nsec): min=8504, max=42049, avg=9327.09, stdev=901.67 00:08:08.031 clat (usec): min=42, max=316, avg=55.85, stdev=14.19 00:08:08.031 lat (usec): min=52, max=325, avg=65.17, stdev=14.24 00:08:08.031 clat percentiles (usec): 00:08:08.031 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:08:08.031 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 56], 00:08:08.031 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 61], 00:08:08.031 | 99.00th=[ 119], 99.50th=[ 182], 99.90th=[ 227], 99.95th=[ 285], 00:08:08.031 | 99.99th=[ 318] 00:08:08.031 bw ( KiB/s): min=30672, max=30672, per=99.94%, avg=30672.00, stdev= 0.00, samples=1 00:08:08.031 iops : min= 7668, max= 7668, avg=7668.00, stdev= 0.00, samples=1 00:08:08.031 lat (usec) : 50=6.05%, 100=92.74%, 250=1.12%, 500=0.08% 00:08:08.031 cpu : usr=6.60%, sys=12.50%, ctx=14856, majf=0, minf=1 00:08:08.031 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:08.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:08.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:08.031 issued rwts: total=7176,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:08.031 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:08.031 00:08:08.031 Run status group 0 (all jobs): 00:08:08.031 READ: bw=28.0MiB/s (29.4MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:08:08.031 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:08:08.031 00:08:08.031 Disk stats (read/write): 00:08:08.031 nvme0n1: ios=6706/6658, merge=0/0, ticks=376/338, in_queue=714, util=90.78% 00:08:08.031 16:21:02 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o 
NAME,SERIAL 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.937 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:10.196 rmmod nvme_rdma 00:08:10.196 rmmod nvme_fabrics 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3671316 ']' 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3671316 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3671316 ']' 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3671316 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671316 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671316' 00:08:10.196 killing process with pid 3671316 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3671316 00:08:10.196 16:21:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3671316 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:10.455 00:08:10.455 real 0m14.046s 00:08:10.455 user 0m41.588s 00:08:10.455 sys 0m5.042s 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:10.455 ************************************ 00:08:10.455 
END TEST nvmf_nmic 00:08:10.455 ************************************ 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.455 ************************************ 00:08:10.455 START TEST nvmf_fio_target 00:08:10.455 ************************************ 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:10.455 * Looking for test storage... 00:08:10.455 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:10.455 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:10.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.714 --rc genhtml_branch_coverage=1 00:08:10.714 --rc genhtml_function_coverage=1 00:08:10.714 --rc genhtml_legend=1 00:08:10.714 --rc geninfo_all_blocks=1 00:08:10.714 --rc geninfo_unexecuted_blocks=1 00:08:10.714 00:08:10.714 ' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:10.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.714 --rc genhtml_branch_coverage=1 00:08:10.714 --rc genhtml_function_coverage=1 00:08:10.714 --rc genhtml_legend=1 00:08:10.714 --rc geninfo_all_blocks=1 00:08:10.714 --rc geninfo_unexecuted_blocks=1 00:08:10.714 00:08:10.714 ' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:10.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.714 --rc genhtml_branch_coverage=1 00:08:10.714 --rc genhtml_function_coverage=1 00:08:10.714 --rc genhtml_legend=1 00:08:10.714 --rc geninfo_all_blocks=1 00:08:10.714 --rc geninfo_unexecuted_blocks=1 00:08:10.714 00:08:10.714 ' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:10.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.714 --rc genhtml_branch_coverage=1 00:08:10.714 --rc genhtml_function_coverage=1 00:08:10.714 --rc genhtml_legend=1 00:08:10.714 --rc geninfo_all_blocks=1 00:08:10.714 --rc geninfo_unexecuted_blocks=1 00:08:10.714 00:08:10.714 ' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:10.714 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.714 
16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:10.714 16:21:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.281 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.281 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
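For orientation: gather_supported_nvmf_pci_devs (nvmf/common.sh) is populating per-NIC-family arrays here, keyed on PCI vendor:device IDs — 0x8086 for the Intel e810/x722 entries above, 0x15b3 for the Mellanox entries that continue just below. A standalone sketch of the same lookup, assuming lspci from pciutils is available (pci_bus_cache itself is SPDK-internal, so this re-derives the list directly from the bus):

# List PCI addresses of NICs matching the Mellanox device IDs the test probes.
# -D prints the full domain:bus:dev.fn address; -nn keeps numeric vendor:device IDs.
for id in a2dc 1021 a2d6 101d 101b 1017 1019 1015 1013; do
    lspci -Dnn -d "15b3:${id}"
done | awk '{print $1}'

On this node that loop would surface the two 0x1015 devices at 0000:18:00.0 and 0000:18:00.1 that the trace reports a few lines down.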
00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:17.282 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:17.282 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:17.282 Found net devices under 0000:18:00.0: mlx_0_0 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:17.282 Found net devices under 0000:18:00.1: mlx_0_1 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:17.282 16:21:10 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:17.282 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:17.282 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:17.282 altname enp24s0f0np0 00:08:17.282 altname ens785f0np0 00:08:17.282 inet 192.168.100.8/24 scope global mlx_0_0 00:08:17.282 valid_lft forever preferred_lft forever 00:08:17.282 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:17.283 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:17.283 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:17.283 altname enp24s0f1np1 00:08:17.283 altname ens785f1np1 00:08:17.283 inet 192.168.100.9/24 scope global mlx_0_1 00:08:17.283 valid_lft forever preferred_lft forever 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:17.283 16:21:10 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:17.283 192.168.100.9' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:17.283 192.168.100.9' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:17.283 192.168.100.9' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:17.283 16:21:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3676957 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3676957 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3676957 ']' 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 [2024-12-06 16:21:11.078878] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:08:17.283 [2024-12-06 16:21:11.078923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.283 [2024-12-06 16:21:11.138148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.283 [2024-12-06 16:21:11.177011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:17.283 [2024-12-06 16:21:11.177049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.283 [2024-12-06 16:21:11.177056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.283 [2024-12-06 16:21:11.177061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.283 [2024-12-06 16:21:11.177066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.283 [2024-12-06 16:21:11.178492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.283 [2024-12-06 16:21:11.178509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.283 [2024-12-06 16:21:11.178594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.283 [2024-12-06 16:21:11.178596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:17.283 [2024-12-06 16:21:11.495139] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8a30c0/0x8a75b0) succeed. 00:08:17.283 [2024-12-06 16:21:11.503281] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8a4750/0x8e8c50) succeed. 
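Both IB devices are up at this point, so the target-side provisioning in fio.sh@21 through fio.sh@46 begins. Flattened out of the trace that follows, it reduces to this call sequence (the long workspace path is shortened to rpc.py; rpc.py defaults to the /var/tmp/spdk.sock socket this run listens on):

rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # RDMA transport (created just above)
rpc.py bdev_malloc_create 64 512                                          # 64 MiB / 512 B-block bdevs: Malloc0..Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'          # RAID0 over two malloc bdevs
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # plus Malloc1, raid0, concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 --hostnqn=... --hostid=... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420   # initiator side

The nvme connect -i 15 form is the NVME_CONNECT override installed for RDMA earlier in the trace, and the hostnqn/hostid values are the NVME_HOST pair derived from nvme gen-hostnqn during common.sh setup.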
00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:17.283 16:21:11 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.543 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:17.543 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.543 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:17.543 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:17.802 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:17.802 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:18.061 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.320 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:18.320 16:21:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.320 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:18.320 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:18.579 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:18.579 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:18.839 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.098 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:19.098 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.098 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:19.098 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.357 16:21:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:19.616 [2024-12-06 16:21:14.086751] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:19.616 16:21:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:19.616 16:21:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:19.875 16:21:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:20.812 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:20.813 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:20.813 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.813 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:20.813 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:20.813 16:21:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:23.344 16:21:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:23.344 [global] 00:08:23.344 thread=1 00:08:23.344 invalidate=1 00:08:23.344 rw=write 00:08:23.344 time_based=1 00:08:23.344 runtime=1 00:08:23.344 ioengine=libaio 00:08:23.344 direct=1 00:08:23.344 bs=4096 00:08:23.344 iodepth=1 00:08:23.344 norandommap=0 00:08:23.344 numjobs=1 00:08:23.344 00:08:23.344 verify_dump=1 00:08:23.344 verify_backlog=512 00:08:23.344 verify_state_save=0 00:08:23.344 do_verify=1 00:08:23.344 verify=crc32c-intel 00:08:23.344 [job0] 00:08:23.344 filename=/dev/nvme0n1 00:08:23.344 [job1] 00:08:23.344 filename=/dev/nvme0n2 00:08:23.344 [job2] 00:08:23.344 filename=/dev/nvme0n3 00:08:23.344 [job3] 00:08:23.344 filename=/dev/nvme0n4 00:08:23.344 Could not set queue depth (nvme0n1) 00:08:23.344 Could not set queue depth (nvme0n2) 00:08:23.344 Could not set queue depth (nvme0n3) 00:08:23.344 Could not set queue depth (nvme0n4) 00:08:23.344 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.344 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.344 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.344 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:23.344 fio-3.35 00:08:23.344 Starting 4 threads 00:08:24.723 00:08:24.723 job0: (groupid=0, jobs=1): err= 0: pid=3678479: Fri Dec 6 16:21:19 2024 00:08:24.723 read: IOPS=5732, BW=22.4MiB/s (23.5MB/s)(22.4MiB/1001msec) 00:08:24.723 slat (nsec): min=2134, max=25301, avg=5714.60, stdev=1984.65 00:08:24.723 clat (usec): min=55, max=240, avg=78.30, stdev= 8.97 00:08:24.723 lat (usec): min=60, max=248, avg=84.01, stdev= 9.71 00:08:24.723 clat percentiles (usec): 00:08:24.723 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:08:24.723 | 30.00th=[ 75], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 80], 00:08:24.723 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 87], 95.00th=[ 90], 00:08:24.723 | 99.00th=[ 98], 99.50th=[ 106], 99.90th=[ 206], 99.95th=[ 227], 00:08:24.724 | 99.99th=[ 241] 00:08:24.724 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets 00:08:24.724 slat (nsec): min=3476, max=32283, avg=7120.06, stdev=2514.91 00:08:24.724 clat (usec): min=54, max=324, avg=74.31, stdev=11.46 00:08:24.724 lat (usec): min=59, max=333, avg=81.43, stdev=12.28 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 69], 00:08:24.724 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 75], 00:08:24.724 | 70.00th=[ 77], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 86], 00:08:24.724 | 99.00th=[ 96], 99.50th=[ 147], 99.90th=[ 221], 99.95th=[ 225], 00:08:24.724 | 99.99th=[ 326] 00:08:24.724 bw ( KiB/s): min=24576, max=24576, per=31.57%, avg=24576.00, stdev= 0.00, samples=1 00:08:24.724 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:08:24.724 lat (usec) : 100=99.18%, 250=0.80%, 500=0.02% 00:08:24.724 cpu : usr=3.70%, sys=8.00%, ctx=11882, majf=0, minf=1 00:08:24.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 issued rwts: total=5738,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.724 job1: (groupid=0, jobs=1): err= 0: pid=3678480: Fri Dec 6 16:21:19 2024 00:08:24.724 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:24.724 slat (nsec): min=5704, max=27370, avg=7183.38, stdev=953.88 00:08:24.724 clat (usec): min=65, max=195, avg=129.10, stdev=23.10 00:08:24.724 lat (usec): min=72, max=202, avg=136.29, stdev=23.12 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 76], 5.00th=[ 85], 10.00th=[ 91], 20.00th=[ 119], 00:08:24.724 | 30.00th=[ 124], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:08:24.724 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 163], 95.00th=[ 176], 00:08:24.724 | 99.00th=[ 186], 99.50th=[ 188], 99.90th=[ 194], 99.95th=[ 194], 00:08:24.724 | 99.99th=[ 196] 00:08:24.724 write: IOPS=3982, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1001msec); 0 zone resets 00:08:24.724 slat (nsec): min=7979, max=38702, avg=9360.31, stdev=1279.73 00:08:24.724 clat (usec): min=58, max=191, avg=115.30, 
stdev=25.62 00:08:24.724 lat (usec): min=67, max=201, avg=124.66, stdev=25.60 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 67], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 85], 00:08:24.724 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 119], 60.00th=[ 122], 00:08:24.724 | 70.00th=[ 125], 80.00th=[ 131], 90.00th=[ 153], 95.00th=[ 161], 00:08:24.724 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 186], 00:08:24.724 | 99.99th=[ 192] 00:08:24.724 bw ( KiB/s): min=16384, max=16384, per=21.05%, avg=16384.00, stdev= 0.00, samples=1 00:08:24.724 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:24.724 lat (usec) : 100=18.60%, 250=81.40% 00:08:24.724 cpu : usr=2.60%, sys=7.40%, ctx=7570, majf=0, minf=1 00:08:24.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 issued rwts: total=3584,3986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.724 job2: (groupid=0, jobs=1): err= 0: pid=3678481: Fri Dec 6 16:21:19 2024 00:08:24.724 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:24.724 slat (nsec): min=6292, max=41602, avg=7394.18, stdev=1069.38 00:08:24.724 clat (usec): min=73, max=188, avg=128.83, stdev=15.42 00:08:24.724 lat (usec): min=80, max=195, avg=136.22, stdev=15.40 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 84], 5.00th=[ 97], 10.00th=[ 115], 20.00th=[ 121], 00:08:24.724 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 133], 00:08:24.724 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 157], 00:08:24.724 | 99.00th=[ 172], 99.50th=[ 174], 99.90th=[ 182], 99.95th=[ 186], 00:08:24.724 | 99.99th=[ 190] 00:08:24.724 write: IOPS=3966, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:08:24.724 slat (nsec): min=8321, max=35831, avg=9568.83, stdev=1109.80 00:08:24.724 clat (usec): min=64, max=177, avg=115.68, stdev=18.60 00:08:24.724 lat (usec): min=74, max=186, avg=125.25, stdev=18.62 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 72], 5.00th=[ 78], 10.00th=[ 84], 20.00th=[ 108], 00:08:24.724 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 121], 00:08:24.724 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 137], 95.00th=[ 149], 00:08:24.724 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 176], 00:08:24.724 | 99.99th=[ 178] 00:08:24.724 bw ( KiB/s): min=16384, max=16384, per=21.05%, avg=16384.00, stdev= 0.00, samples=1 00:08:24.724 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:24.724 lat (usec) : 100=10.82%, 250=89.18% 00:08:24.724 cpu : usr=3.40%, sys=6.80%, ctx=7554, majf=0, minf=1 00:08:24.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 issued rwts: total=3584,3970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.724 job3: (groupid=0, jobs=1): err= 0: pid=3678482: Fri Dec 6 16:21:19 2024 00:08:24.724 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:08:24.724 slat (nsec): min=6341, max=19838, avg=7144.77, stdev=641.87 00:08:24.724 clat (usec): min=68, max=277, avg=86.46, 
stdev= 6.97 00:08:24.724 lat (usec): min=76, max=284, avg=93.60, stdev= 7.01 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 82], 00:08:24.724 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:08:24.724 | 70.00th=[ 90], 80.00th=[ 92], 90.00th=[ 95], 95.00th=[ 98], 00:08:24.724 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 114], 99.95th=[ 126], 00:08:24.724 | 99.99th=[ 277] 00:08:24.724 write: IOPS=5376, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1001msec); 0 zone resets 00:08:24.724 slat (nsec): min=8526, max=36962, avg=9490.00, stdev=1064.62 00:08:24.724 clat (usec): min=63, max=191, avg=83.46, stdev=11.74 00:08:24.724 lat (usec): min=73, max=203, avg=92.95, stdev=11.90 00:08:24.724 clat percentiles (usec): 00:08:24.724 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 77], 00:08:24.724 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 83], 00:08:24.724 | 70.00th=[ 85], 80.00th=[ 88], 90.00th=[ 94], 95.00th=[ 111], 00:08:24.724 | 99.00th=[ 131], 99.50th=[ 141], 99.90th=[ 157], 99.95th=[ 174], 00:08:24.724 | 99.99th=[ 192] 00:08:24.724 bw ( KiB/s): min=21488, max=21488, per=27.60%, avg=21488.00, stdev= 0.00, samples=1 00:08:24.724 iops : min= 5372, max= 5372, avg=5372.00, stdev= 0.00, samples=1 00:08:24.724 lat (usec) : 100=95.00%, 250=4.99%, 500=0.01% 00:08:24.724 cpu : usr=5.40%, sys=8.50%, ctx=10502, majf=0, minf=1 00:08:24.724 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.724 issued rwts: total=5120,5382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.724 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:24.724 00:08:24.724 Run status group 0 (all jobs): 00:08:24.724 READ: bw=70.3MiB/s (73.8MB/s), 14.0MiB/s-22.4MiB/s (14.7MB/s-23.5MB/s), io=70.4MiB (73.8MB), run=1001-1001msec 00:08:24.724 WRITE: bw=76.0MiB/s (79.7MB/s), 15.5MiB/s-24.0MiB/s (16.2MB/s-25.1MB/s), io=76.1MiB (79.8MB), run=1001-1001msec 00:08:24.724 00:08:24.724 Disk stats (read/write): 00:08:24.724 nvme0n1: ios=5130/5120, merge=0/0, ticks=399/370, in_queue=769, util=86.77% 00:08:24.724 nvme0n2: ios=3072/3424, merge=0/0, ticks=376/387, in_queue=763, util=87.27% 00:08:24.724 nvme0n3: ios=3072/3406, merge=0/0, ticks=382/365, in_queue=747, util=89.09% 00:08:24.724 nvme0n4: ios=4384/4608, merge=0/0, ticks=362/376, in_queue=738, util=89.64% 00:08:24.724 16:21:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:24.724 [global] 00:08:24.724 thread=1 00:08:24.724 invalidate=1 00:08:24.724 rw=randwrite 00:08:24.724 time_based=1 00:08:24.724 runtime=1 00:08:24.724 ioengine=libaio 00:08:24.724 direct=1 00:08:24.724 bs=4096 00:08:24.724 iodepth=1 00:08:24.724 norandommap=0 00:08:24.724 numjobs=1 00:08:24.724 00:08:24.724 verify_dump=1 00:08:24.724 verify_backlog=512 00:08:24.724 verify_state_save=0 00:08:24.724 do_verify=1 00:08:24.724 verify=crc32c-intel 00:08:24.724 [job0] 00:08:24.724 filename=/dev/nvme0n1 00:08:24.724 [job1] 00:08:24.724 filename=/dev/nvme0n2 00:08:24.724 [job2] 00:08:24.724 filename=/dev/nvme0n3 00:08:24.724 [job3] 00:08:24.724 filename=/dev/nvme0n4 00:08:24.724 Could not set queue depth (nvme0n1) 00:08:24.724 Could not set queue depth (nvme0n2) 00:08:24.724 Could not set queue depth (nvme0n3) 
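A note on the workload before the randwrite pass's output: the fio-wrapper flags map directly onto the job file printed above — -i 4096 is the 4 KiB block size, -d 1 the queue depth, -t the rw mode, -r 1 a one-second time_based run, and -v enables crc32c-intel verification, with one job per connected namespace. As a hypothetical single-device command line carrying the same parameters (bypassing the wrapper):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --rw=randwrite --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 --norandommap=0 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0

The remaining queue-depth warnings and the per-job results for this pass continue below.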
00:08:24.724 Could not set queue depth (nvme0n4)
00:08:24.724 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.724 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.724 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.724 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:24.724 fio-3.35
00:08:24.724 Starting 4 threads
00:08:26.103
00:08:26.103 job0: (groupid=0, jobs=1): err= 0: pid=3678906: Fri Dec 6 16:21:20 2024
00:08:26.103 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:08:26.103 slat (nsec): min=6267, max=18366, avg=7545.64, stdev=1029.68
00:08:26.103 clat (usec): min=65, max=284, avg=127.36, stdev=19.89
00:08:26.103 lat (usec): min=74, max=291, avg=134.91, stdev=20.05
00:08:26.103 clat percentiles (usec):
00:08:26.103 | 1.00th=[ 74], 5.00th=[ 98], 10.00th=[ 109], 20.00th=[ 115],
00:08:26.103 | 30.00th=[ 118], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 130],
00:08:26.103 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157],
00:08:26.103 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 210], 99.95th=[ 241],
00:08:26.103 | 99.99th=[ 285]
00:08:26.103 write: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets
00:08:26.103 slat (nsec): min=7893, max=44005, avg=9410.02, stdev=1167.47
00:08:26.103 clat (usec): min=64, max=363, avg=119.54, stdev=18.31
00:08:26.103 lat (usec): min=73, max=372, avg=128.95, stdev=18.49
00:08:26.103 clat percentiles (usec):
00:08:26.103 | 1.00th=[ 77], 5.00th=[ 95], 10.00th=[ 100], 20.00th=[ 106],
00:08:26.103 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 123],
00:08:26.103 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147],
00:08:26.103 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 192], 99.95th=[ 194],
00:08:26.103 | 99.99th=[ 363]
00:08:26.103 bw ( KiB/s): min=15440, max=15440, per=20.88%, avg=15440.00, stdev= 0.00, samples=1
00:08:26.103 iops : min= 3860, max= 3860, avg=3860.00, stdev= 0.00, samples=1
00:08:26.103 lat (usec) : 100=7.80%, 250=92.17%, 500=0.03%
00:08:26.103 cpu : usr=2.50%, sys=7.50%, ctx=7475, majf=0, minf=1
00:08:26.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:26.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.103 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:26.103 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:26.103 job1: (groupid=0, jobs=1): err= 0: pid=3678907: Fri Dec 6 16:21:20 2024
00:08:26.103 read: IOPS=5834, BW=22.8MiB/s (23.9MB/s)(22.8MiB/1001msec)
00:08:26.103 slat (nsec): min=6383, max=27907, avg=7128.01, stdev=804.98
00:08:26.103 clat (usec): min=61, max=173, avg=75.14, stdev= 5.40
00:08:26.103 lat (usec): min=68, max=180, avg=82.26, stdev= 5.45
00:08:26.103 clat percentiles (usec):
00:08:26.103 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72],
00:08:26.103 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77],
00:08:26.103 | 70.00th=[ 78], 80.00th=[ 79], 90.00th=[ 82], 95.00th=[ 84],
00:08:26.103 | 99.00th=[ 90], 99.50th=[ 93], 99.90th=[ 118], 99.95th=[ 141],
00:08:26.103 | 99.99th=[ 174]
00:08:26.103 write: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec); 0 zone resets
00:08:26.103 slat (nsec): min=8265, max=40948, avg=9043.55, stdev=937.39
00:08:26.103 clat (usec): min=58, max=264, avg=71.79, stdev= 8.42
00:08:26.103 lat (usec): min=67, max=273, avg=80.84, stdev= 8.49
00:08:26.103 clat percentiles (usec):
00:08:26.103 | 1.00th=[ 62], 5.00th=[ 65], 10.00th=[ 66], 20.00th=[ 68],
00:08:26.103 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73],
00:08:26.103 | 70.00th=[ 74], 80.00th=[ 76], 90.00th=[ 78], 95.00th=[ 81],
00:08:26.103 | 99.00th=[ 113], 99.50th=[ 125], 99.90th=[ 159], 99.95th=[ 184],
00:08:26.103 | 99.99th=[ 265]
00:08:26.103 bw ( KiB/s): min=24576, max=24576, per=33.24%, avg=24576.00, stdev= 0.00, samples=1
00:08:26.103 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1
00:08:26.103 lat (usec) : 100=99.34%, 250=0.65%, 500=0.01%
00:08:26.103 cpu : usr=5.20%, sys=10.20%, ctx=11984, majf=0, minf=2
00:08:26.103 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:26.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.103 issued rwts: total=5840,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:26.103 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:26.103 job2: (groupid=0, jobs=1): err= 0: pid=3678909: Fri Dec 6 16:21:20 2024
00:08:26.103 read: IOPS=4483, BW=17.5MiB/s (18.4MB/s)(17.5MiB/1001msec)
00:08:26.103 slat (nsec): min=6306, max=28334, avg=7547.57, stdev=914.98
00:08:26.104 clat (usec): min=62, max=209, avg=102.57, stdev=29.54
00:08:26.104 lat (usec): min=74, max=217, avg=110.11, stdev=29.83
00:08:26.104 clat percentiles (usec):
00:08:26.104 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 80],
00:08:26.104 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 93],
00:08:26.104 | 70.00th=[ 110], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 153],
00:08:26.104 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 198], 99.95th=[ 200],
00:08:26.104 | 99.99th=[ 210]
00:08:26.104 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets
00:08:26.104 slat (nsec): min=8170, max=39885, avg=9363.01, stdev=1108.98
00:08:26.104 clat (usec): min=64, max=210, avg=96.61, stdev=28.64
00:08:26.104 lat (usec): min=73, max=219, avg=105.97, stdev=28.93
00:08:26.104 clat percentiles (usec):
00:08:26.104 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75],
00:08:26.104 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 86],
00:08:26.104 | 70.00th=[ 124], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145],
00:08:26.104 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 196],
00:08:26.104 | 99.99th=[ 210]
00:08:26.104 bw ( KiB/s): min=16384, max=16384, per=22.16%, avg=16384.00, stdev= 0.00, samples=1
00:08:26.104 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1
00:08:26.104 lat (usec) : 100=67.40%, 250=32.60%
00:08:26.104 cpu : usr=4.60%, sys=7.40%, ctx=9097, majf=0, minf=1
00:08:26.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:26.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.104 issued rwts: total=4488,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:26.104 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:26.104 job3: (groupid=0, jobs=1): err= 0: pid=3678910: Fri Dec 6 16:21:20 2024
00:08:26.104 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec)
00:08:26.104 slat (nsec): min=6432, max=28520, avg=7672.66, stdev=1049.27
00:08:26.104 clat (usec): min=72, max=311, avg=128.66, stdev=18.07
00:08:26.104 lat (usec): min=79, max=318, avg=136.34, stdev=18.27
00:08:26.104 clat percentiles (usec):
00:08:26.104 | 1.00th=[ 89], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 115],
00:08:26.104 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 131],
00:08:26.104 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155],
00:08:26.104 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 210], 99.95th=[ 269],
00:08:26.104 | 99.99th=[ 310]
00:08:26.104 write: IOPS=3855, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets
00:08:26.104 slat (nsec): min=7997, max=35680, avg=9608.62, stdev=1254.52
00:08:26.104 clat (usec): min=69, max=295, avg=119.17, stdev=17.10
00:08:26.104 lat (usec): min=78, max=310, avg=128.77, stdev=17.27
00:08:26.104 clat percentiles (usec):
00:08:26.104 | 1.00th=[ 81], 5.00th=[ 96], 10.00th=[ 101], 20.00th=[ 106],
00:08:26.104 | 30.00th=[ 110], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 123],
00:08:26.104 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145],
00:08:26.104 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 188], 99.95th=[ 204],
00:08:26.104 | 99.99th=[ 297]
00:08:26.104 bw ( KiB/s): min=15192, max=15192, per=20.55%, avg=15192.00, stdev= 0.00, samples=1
00:08:26.104 iops : min= 3798, max= 3798, avg=3798.00, stdev= 0.00, samples=1
00:08:26.104 lat (usec) : 100=5.84%, 250=94.12%, 500=0.04%
00:08:26.104 cpu : usr=3.50%, sys=6.50%, ctx=7444, majf=0, minf=1
00:08:26.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:26.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:26.104 issued rwts: total=3584,3859,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:26.104 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:26.104
00:08:26.104 Run status group 0 (all jobs):
00:08:26.104 READ: bw=68.3MiB/s (71.6MB/s), 14.0MiB/s-22.8MiB/s (14.7MB/s-23.9MB/s), io=68.3MiB (71.7MB), run=1001-1001msec
00:08:26.104 WRITE: bw=72.2MiB/s (75.7MB/s), 15.1MiB/s-24.0MiB/s (15.8MB/s-25.1MB/s), io=72.3MiB (75.8MB), run=1001-1001msec
00:08:26.104
00:08:26.104 Disk stats (read/write):
00:08:26.104 nvme0n1: ios=3122/3289, merge=0/0, ticks=402/386, in_queue=788, util=87.27%
00:08:26.104 nvme0n2: ios=5120/5234, merge=0/0, ticks=376/350, in_queue=726, util=87.44%
00:08:26.104 nvme0n3: ios=3584/3976, merge=0/0, ticks=375/384, in_queue=759, util=89.15%
00:08:26.104 nvme0n4: ios=3072/3260, merge=0/0, ticks=382/383, in_queue=765, util=89.80%
00:08:26.104 16:21:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:08:26.104 [global]
00:08:26.104 thread=1
00:08:26.104 invalidate=1
00:08:26.104 rw=write
00:08:26.104 time_based=1
00:08:26.104 runtime=1
00:08:26.104 ioengine=libaio
00:08:26.104 direct=1
00:08:26.104 bs=4096
00:08:26.104 iodepth=128
00:08:26.104 norandommap=0
00:08:26.104 numjobs=1
00:08:26.104
00:08:26.104 verify_dump=1
00:08:26.104 verify_backlog=512
00:08:26.104 verify_state_save=0
00:08:26.104 do_verify=1
00:08:26.104 verify=crc32c-intel
00:08:26.104 [job0]
00:08:26.104 filename=/dev/nvme0n1
00:08:26.104 [job1]
00:08:26.104 filename=/dev/nvme0n2
00:08:26.104 [job2]
00:08:26.104 filename=/dev/nvme0n3
00:08:26.104 [job3]
00:08:26.104 filename=/dev/nvme0n4
00:08:26.104 Could not set queue depth (nvme0n1)
00:08:26.104 Could not set queue depth (nvme0n2)
00:08:26.104 Could not set queue depth (nvme0n3)
00:08:26.104 Could not set queue depth (nvme0n4)
00:08:26.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:26.363 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:26.363 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:26.363 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:26.363 fio-3.35
00:08:26.363 Starting 4 threads
00:08:27.775
00:08:27.775 job0: (groupid=0, jobs=1): err= 0: pid=3679324: Fri Dec 6 16:21:22 2024
00:08:27.775 read: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(23.6MiB/1004msec)
00:08:27.775 slat (nsec): min=1277, max=5413.7k, avg=81997.25, stdev=384715.57
00:08:27.775 clat (usec): min=599, max=19969, avg=10774.49, stdev=3868.99
00:08:27.775 lat (usec): min=904, max=19975, avg=10856.49, stdev=3885.57
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 2704], 5.00th=[ 4817], 10.00th=[ 5735], 20.00th=[ 7046],
00:08:27.775 | 30.00th=[ 8160], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11863],
00:08:27.775 | 70.00th=[12911], 80.00th=[14222], 90.00th=[16450], 95.00th=[17433],
00:08:27.775 | 99.00th=[18482], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530],
00:08:27.775 | 99.99th=[20055]
00:08:27.775 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets
00:08:27.775 slat (nsec): min=1838, max=5547.5k, avg=73934.57, stdev=339195.26
00:08:27.775 clat (usec): min=513, max=18530, avg=10117.93, stdev=3981.70
00:08:27.775 lat (usec): min=524, max=18537, avg=10191.86, stdev=4009.48
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 3163], 5.00th=[ 4555], 10.00th=[ 5211], 20.00th=[ 6128],
00:08:27.775 | 30.00th=[ 6915], 40.00th=[ 8356], 50.00th=[10028], 60.00th=[11338],
00:08:27.775 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15795], 95.00th=[16712],
00:08:27.775 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482],
00:08:27.775 | 99.99th=[18482]
00:08:27.775 bw ( KiB/s): min=23976, max=25176, per=22.44%, avg=24576.00, stdev=848.53, samples=2
00:08:27.775 iops : min= 5994, max= 6294, avg=6144.00, stdev=212.13, samples=2
00:08:27.775 lat (usec) : 750=0.03%, 1000=0.04%
00:08:27.775 lat (msec) : 2=0.21%, 4=2.57%, 10=43.24%, 20=53.91%
00:08:27.775 cpu : usr=3.69%, sys=4.29%, ctx=1541, majf=0, minf=1
00:08:27.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:08:27.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:27.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:27.775 issued rwts: total=6029,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.775 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:27.775 job1: (groupid=0, jobs=1): err= 0: pid=3679325: Fri Dec 6 16:21:22 2024
00:08:27.775 read: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(32.0MiB/1004msec)
00:08:27.775 slat (nsec): min=1274, max=5223.9k, avg=60636.30, stdev=279910.95
00:08:27.775 clat (usec): min=2842, max=17064, avg=7852.74, stdev=2778.87
00:08:27.775 lat (usec): min=3060, max=18434, avg=7913.38, stdev=2794.82
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 3982], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5604],
00:08:27.775 | 30.00th=[ 6194], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7504],
00:08:27.775 | 70.00th=[ 8586], 80.00th=[10028], 90.00th=[11863], 95.00th=[13304],
00:08:27.775 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171],
00:08:27.775 | 99.99th=[17171]
00:08:27.775 write: IOPS=8547, BW=33.4MiB/s (35.0MB/s)(33.5MiB/1004msec); 0 zone resets
00:08:27.775 slat (nsec): min=1766, max=4066.1k, avg=55544.49, stdev=239388.65
00:08:27.775 clat (usec): min=2526, max=20162, avg=7312.59, stdev=3188.00
00:08:27.775 lat (usec): min=2604, max=20673, avg=7368.14, stdev=3205.10
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 3752], 5.00th=[ 4424], 10.00th=[ 4686], 20.00th=[ 5145],
00:08:27.775 | 30.00th=[ 5669], 40.00th=[ 5997], 50.00th=[ 6259], 60.00th=[ 6652],
00:08:27.775 | 70.00th=[ 7308], 80.00th=[ 8717], 90.00th=[11338], 95.00th=[16057],
00:08:27.775 | 99.00th=[17695], 99.50th=[19006], 99.90th=[20055], 99.95th=[20055],
00:08:27.775 | 99.99th=[20055]
00:08:27.775 bw ( KiB/s): min=30776, max=36864, per=30.88%, avg=33820.00, stdev=4304.87, samples=2
00:08:27.775 iops : min= 7694, max= 9216, avg=8455.00, stdev=1076.22, samples=2
00:08:27.775 lat (msec) : 4=1.27%, 10=82.11%, 20=16.57%, 50=0.05%
00:08:27.775 cpu : usr=3.29%, sys=5.48%, ctx=1646, majf=0, minf=2
00:08:27.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:08:27.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:27.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:27.775 issued rwts: total=8192,8582,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.775 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:27.775 job2: (groupid=0, jobs=1): err= 0: pid=3679326: Fri Dec 6 16:21:22 2024
00:08:27.775 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec)
00:08:27.775 slat (nsec): min=1325, max=5200.0k, avg=77603.05, stdev=383514.54
00:08:27.775 clat (usec): min=2128, max=20718, avg=10343.34, stdev=3728.85
00:08:27.775 lat (usec): min=2131, max=21428, avg=10420.95, stdev=3750.55
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 3621], 5.00th=[ 5276], 10.00th=[ 5932], 20.00th=[ 7046],
00:08:27.775 | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11076],
00:08:27.775 | 70.00th=[12387], 80.00th=[13829], 90.00th=[15270], 95.00th=[17433],
00:08:27.775 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841],
00:08:27.775 | 99.99th=[20841]
00:08:27.775 write: IOPS=6220, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1004msec); 0 zone resets
00:08:27.775 slat (nsec): min=1897, max=5067.6k, avg=80513.34, stdev=347454.18
00:08:27.775 clat (usec): min=2854, max=20687, avg=10144.44, stdev=3810.49
00:08:27.775 lat (usec): min=3098, max=20693, avg=10224.96, stdev=3832.15
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 6915],
00:08:27.775 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8848], 60.00th=[10159],
00:08:27.775 | 70.00th=[12125], 80.00th=[14484], 90.00th=[16057], 95.00th=[16909],
00:08:27.775 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055],
00:08:27.775 | 99.99th=[20579]
00:08:27.775 bw ( KiB/s): min=23656, max=25496, per=22.44%, avg=24576.00, stdev=1301.08, samples=2
00:08:27.775 iops : min= 5914, max= 6374, avg=6144.00, stdev=325.27, samples=2
00:08:27.775 lat (msec) : 4=1.17%, 10=54.09%, 20=44.23%, 50=0.51%
00:08:27.775 cpu : usr=2.39%, sys=4.69%, ctx=1333, majf=0, minf=1
00:08:27.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:08:27.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:27.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:27.775 issued rwts: total=6144,6245,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.775 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:27.775 job3: (groupid=0, jobs=1): err= 0: pid=3679327: Fri Dec 6 16:21:22 2024
00:08:27.775 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec)
00:08:27.775 slat (nsec): min=1324, max=5488.5k, avg=80628.84, stdev=398108.35
00:08:27.775 clat (usec): min=3722, max=19217, avg=10226.50, stdev=3326.02
00:08:27.775 lat (usec): min=3835, max=19220, avg=10307.13, stdev=3343.65
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 4490], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6456],
00:08:27.775 | 30.00th=[ 7898], 40.00th=[ 9241], 50.00th=[10421], 60.00th=[11338],
00:08:27.775 | 70.00th=[12387], 80.00th=[13435], 90.00th=[14484], 95.00th=[15664],
00:08:27.775 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19268], 99.95th=[19268],
00:08:27.775 | 99.99th=[19268]
00:08:27.775 write: IOPS=6503, BW=25.4MiB/s (26.6MB/s)(25.5MiB/1003msec); 0 zone resets
00:08:27.775 slat (nsec): min=1869, max=4236.1k, avg=74333.31, stdev=333765.96
00:08:27.775 clat (usec): min=985, max=19843, avg=9810.52, stdev=3494.46
00:08:27.775 lat (usec): min=2568, max=19846, avg=9884.85, stdev=3513.37
00:08:27.775 clat percentiles (usec):
00:08:27.775 | 1.00th=[ 4047], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6456],
00:08:27.775 | 30.00th=[ 7308], 40.00th=[ 8225], 50.00th=[ 9241], 60.00th=[10421],
00:08:27.775 | 70.00th=[11600], 80.00th=[13173], 90.00th=[14877], 95.00th=[16188],
00:08:27.775 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19268], 99.95th=[19792],
00:08:27.775 | 99.99th=[19792]
00:08:27.775 bw ( KiB/s): min=23720, max=27440, per=23.35%, avg=25580.00, stdev=2630.44, samples=2
00:08:27.775 iops : min= 5930, max= 6860, avg=6395.00, stdev=657.61, samples=2
00:08:27.775 lat (usec) : 1000=0.01%
00:08:27.775 lat (msec) : 4=0.54%, 10=50.99%, 20=48.46%
00:08:27.775 cpu : usr=2.00%, sys=5.19%, ctx=1503, majf=0, minf=1
00:08:27.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:08:27.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:27.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:27.776 issued rwts: total=6144,6523,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:27.776 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:27.776
00:08:27.776 Run status group 0 (all jobs):
00:08:27.776 READ: bw=103MiB/s (108MB/s), 23.5MiB/s-31.9MiB/s (24.6MB/s-33.4MB/s), io=104MiB (109MB), run=1003-1004msec
00:08:27.776 WRITE: bw=107MiB/s (112MB/s), 23.9MiB/s-33.4MiB/s (25.1MB/s-35.0MB/s), io=107MiB (113MB), run=1003-1004msec
00:08:27.776
00:08:27.776 Disk stats (read/write):
00:08:27.776 nvme0n1: ios=5453/5632, merge=0/0, ticks=20534/18912, in_queue=39446, util=86.67%
00:08:27.776 nvme0n2: ios=7680/7733, merge=0/0, ticks=17108/15288, in_queue=32396, util=87.34%
00:08:27.776 nvme0n3: ios=5488/5632, merge=0/0, ticks=18962/18684, in_queue=37646, util=88.43%
00:08:27.776 nvme0n4: ios=4679/5120, merge=0/0, ticks=19218/17894, in_queue=37112, util=89.19%
00:08:27.776 16:21:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:08:27.776 [global]
00:08:27.776 thread=1
00:08:27.776 invalidate=1
00:08:27.776 rw=randwrite
00:08:27.776 time_based=1
00:08:27.776 runtime=1
00:08:27.776 ioengine=libaio
00:08:27.776 direct=1
00:08:27.776 bs=4096
00:08:27.776 iodepth=128
00:08:27.776 norandommap=0
00:08:27.776 numjobs=1
00:08:27.776
00:08:27.776 verify_dump=1
00:08:27.776 verify_backlog=512
00:08:27.776 verify_state_save=0
00:08:27.776 do_verify=1
00:08:27.776 verify=crc32c-intel
00:08:27.776 [job0]
00:08:27.776 filename=/dev/nvme0n1
00:08:27.776 [job1]
00:08:27.776 filename=/dev/nvme0n2
00:08:27.776 [job2]
00:08:27.776 filename=/dev/nvme0n3
00:08:27.776 [job3]
00:08:27.776 filename=/dev/nvme0n4
00:08:27.776 Could not set queue depth (nvme0n1)
00:08:27.776 Could not set queue depth (nvme0n2)
00:08:27.776 Could not set queue depth (nvme0n3)
00:08:27.776 Could not set queue depth (nvme0n4)
00:08:28.035 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:28.035 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:28.035 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:28.035 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:28.035 fio-3.35
00:08:28.035 Starting 4 threads
00:08:29.405
00:08:29.405 job0: (groupid=0, jobs=1): err= 0: pid=3679750: Fri Dec 6 16:21:23 2024
00:08:29.405 read: IOPS=6236, BW=24.4MiB/s (25.5MB/s)(24.4MiB/1003msec)
00:08:29.405 slat (nsec): min=1277, max=5262.1k, avg=75695.24, stdev=336978.52
00:08:29.405 clat (usec): min=2331, max=19404, avg=9683.32, stdev=3459.85
00:08:29.405 lat (usec): min=3116, max=20973, avg=9759.01, stdev=3482.19
00:08:29.405 clat percentiles (usec):
00:08:29.405 | 1.00th=[ 3884], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6915],
00:08:29.405 | 30.00th=[ 7439], 40.00th=[ 8225], 50.00th=[ 8979], 60.00th=[ 9896],
00:08:29.405 | 70.00th=[10945], 80.00th=[12518], 90.00th=[15008], 95.00th=[16188],
00:08:29.405 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530],
00:08:29.405 | 99.99th=[19530]
00:08:29.405 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets
00:08:29.405 slat (nsec): min=1760, max=5763.7k, avg=75428.73, stdev=343214.04
00:08:29.405 clat (usec): min=3100, max=19009, avg=9977.24, stdev=3647.02
00:08:29.405 lat (usec): min=3103, max=20817, avg=10052.67, stdev=3665.04
00:08:29.405 clat percentiles (usec):
00:08:29.405 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 6587],
00:08:29.405 | 30.00th=[ 7308], 40.00th=[ 8160], 50.00th=[ 9634], 60.00th=[10814],
00:08:29.405 | 70.00th=[12125], 80.00th=[13698], 90.00th=[14877], 95.00th=[16450],
00:08:29.405 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006],
00:08:29.405 | 99.99th=[19006]
00:08:29.405 bw ( KiB/s): min=24448, max=28672, per=24.50%, avg=26560.00, stdev=2986.82, samples=2
00:08:29.405 iops : min= 6112, max= 7168, avg=6640.00, stdev=746.70, samples=2
00:08:29.405 lat (msec) : 4=1.53%, 10=55.58%, 20=42.89%
00:08:29.405 cpu : usr=3.29%, sys=4.99%, ctx=1709, majf=0, minf=1
00:08:29.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:08:29.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:29.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:29.405 issued rwts: total=6255,6656,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:29.405 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:29.405 job1: (groupid=0, jobs=1): err= 0: pid=3679751: Fri Dec 6 16:21:23 2024
00:08:29.405 read: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec)
00:08:29.405 slat (nsec): min=1286, max=4876.0k, avg=62501.21, stdev=290721.03
00:08:29.405 clat (usec): min=2498, max=19747, avg=8424.61, stdev=3102.64
00:08:29.405 lat (usec): min=2500, max=19953, avg=8487.11, stdev=3119.30
00:08:29.405 clat percentiles (usec):
00:08:29.405 | 1.00th=[ 4047], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5800],
00:08:29.405 | 30.00th=[ 6390], 40.00th=[ 6915], 50.00th=[ 7767], 60.00th=[ 8455],
00:08:29.405 | 70.00th=[ 9503], 80.00th=[10814], 90.00th=[13698], 95.00th=[15401],
00:08:29.405 | 99.00th=[16057], 99.50th=[16450], 99.90th=[17957], 99.95th=[19792],
00:08:29.405 | 99.99th=[19792]
00:08:29.405 write: IOPS=7895, BW=30.8MiB/s (32.3MB/s)(30.9MiB/1001msec); 0 zone resets
00:08:29.405 slat (nsec): min=1807, max=5254.7k, avg=61990.50, stdev=287078.78
00:08:29.405 clat (usec): min=472, max=17329, avg=7820.38, stdev=3087.11
00:08:29.405 lat (usec): min=478, max=17336, avg=7882.37, stdev=3103.69
00:08:29.405 clat percentiles (usec):
00:08:29.405 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 5276],
00:08:29.405 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6915], 60.00th=[ 7767],
00:08:29.405 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[12649], 95.00th=[14484],
00:08:29.405 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16712], 99.95th=[17171],
00:08:29.405 | 99.99th=[17433]
00:08:29.405 bw ( KiB/s): min=32584, max=32584, per=30.05%, avg=32584.00, stdev= 0.00, samples=1
00:08:29.405 iops : min= 8146, max= 8146, avg=8146.00, stdev= 0.00, samples=1
00:08:29.405 lat (usec) : 500=0.03%, 750=0.01%
00:08:29.405 lat (msec) : 2=0.21%, 4=1.59%, 10=74.07%, 20=24.10%
00:08:29.405 cpu : usr=3.30%, sys=6.20%, ctx=1538, majf=0, minf=1
00:08:29.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:08:29.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:29.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:29.406 issued rwts: total=7680,7903,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:29.406 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:29.406 job2: (groupid=0, jobs=1): err= 0: pid=3679752: Fri Dec 6 16:21:23 2024
00:08:29.406 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec)
00:08:29.406 slat (nsec): min=1317, max=4437.2k, avg=70499.16, stdev=308919.94
00:08:29.406 clat (usec): min=3645, max=18656, avg=9382.02, stdev=2782.40
00:08:29.406 lat (usec): min=3648, max=18658, avg=9452.52, stdev=2793.56
00:08:29.406 clat percentiles (usec):
00:08:29.406 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6259], 20.00th=[ 6849],
00:08:29.406 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9896],
00:08:29.406 | 70.00th=[10945], 80.00th=[12256], 90.00th=[13566], 95.00th=[14222],
00:08:29.406 | 99.00th=[15926], 99.50th=[16712], 99.90th=[16909], 99.95th=[18744],
00:08:29.406 | 99.99th=[18744]
00:08:29.406 write: IOPS=6888, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1003msec); 0 zone resets
00:08:29.406 slat (nsec): min=1817, max=4438.1k, avg=73062.36, stdev=310680.22
00:08:29.406 clat (usec): min=1677, max=19409, avg=9327.52, stdev=3320.54
00:08:29.406 lat (usec): min=2862, max=19418, avg=9400.58, stdev=3337.94
00:08:29.406 clat percentiles (usec):
00:08:29.406 | 1.00th=[ 4555], 5.00th=[ 5211], 10.00th=[ 5866], 20.00th=[ 6587],
00:08:29.406 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8291], 60.00th=[ 9634],
00:08:29.406 | 70.00th=[10945], 80.00th=[12387], 90.00th=[13698], 95.00th=[16319],
00:08:29.406 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268],
00:08:29.406 | 99.99th=[19530]
00:08:29.406 bw ( KiB/s): min=25584, max=28672, per=25.02%, avg=27128.00, stdev=2183.55, samples=2
00:08:29.406 iops : min= 6396, max= 7168, avg=6782.00, stdev=545.89, samples=2
00:08:29.406 lat (msec) : 2=0.01%, 4=0.22%, 10=61.68%, 20=38.09%
00:08:29.406 cpu : usr=3.89%, sys=4.59%, ctx=1472, majf=0, minf=1
00:08:29.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:08:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:29.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:29.406 issued rwts: total=6656,6909,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:29.406 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:29.406 job3: (groupid=0, jobs=1): err= 0: pid=3679754: Fri Dec 6 16:21:23 2024
00:08:29.406 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec)
00:08:29.406 slat (nsec): min=1306, max=4770.9k, avg=87511.28, stdev=398174.79
00:08:29.406 clat (usec): min=3870, max=18968, avg=11441.02, stdev=2913.49
00:08:29.406 lat (usec): min=3873, max=19800, avg=11528.53, stdev=2924.18
00:08:29.406 clat percentiles (usec):
00:08:29.406 | 1.00th=[ 5211], 5.00th=[ 6718], 10.00th=[ 7898], 20.00th=[ 8848],
00:08:29.406 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11338], 60.00th=[12256],
00:08:29.406 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15270], 95.00th=[16450],
00:08:29.406 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006],
00:08:29.406 | 99.99th=[19006]
00:08:29.406 write: IOPS=5724, BW=22.4MiB/s (23.4MB/s)(22.4MiB/1004msec); 0 zone resets
00:08:29.406 slat (nsec): min=1784, max=4363.1k, avg=84184.08, stdev=348173.47
00:08:29.406 clat (usec): min=2259, max=21333, avg=10828.35, stdev=3625.63
00:08:29.406 lat (usec): min=2275, max=21336, avg=10912.53, stdev=3640.64
00:08:29.406 clat percentiles (usec):
00:08:29.406 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 7373],
00:08:29.406 | 30.00th=[ 8160], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11994],
00:08:29.406 | 70.00th=[13173], 80.00th=[14222], 90.00th=[15533], 95.00th=[16581],
00:08:29.406 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20579], 99.95th=[21365],
00:08:29.406 | 99.99th=[21365]
00:08:29.406 bw ( KiB/s): min=20480, max=24576, per=20.78%, avg=22528.00, stdev=2896.31, samples=2
00:08:29.406 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2
00:08:29.406 lat (msec) : 4=0.25%, 10=40.14%, 20=59.42%, 50=0.20%
00:08:29.406 cpu : usr=3.09%, sys=4.69%, ctx=1475, majf=0, minf=1
00:08:29.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:08:29.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:29.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:29.406 issued rwts: total=5632,5747,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:29.406 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:29.406
00:08:29.406 Run status group 0 (all jobs):
00:08:29.406 READ: bw=102MiB/s (107MB/s), 21.9MiB/s-30.0MiB/s (23.0MB/s-31.4MB/s), io=102MiB (107MB), run=1001-1004msec
00:08:29.406 WRITE: bw=106MiB/s (111MB/s), 22.4MiB/s-30.8MiB/s (23.4MB/s-32.3MB/s), io=106MiB (111MB), run=1001-1004msec
00:08:29.406
00:08:29.406 Disk stats (read/write):
00:08:29.406 nvme0n1: ios=5682/6113, merge=0/0, ticks=14962/16153, in_queue=31115, util=86.77%
00:08:29.406 nvme0n2: ios=6144/6608, merge=0/0, ticks=15070/15949, in_queue=31019, util=87.13%
00:08:29.406 nvme0n3: ios=6068/6144, merge=0/0, ticks=15509/14670, in_queue=30179, util=88.96%
00:08:29.406 nvme0n4: ios=4611/5120, merge=0/0, ticks=15013/15641, in_queue=30654, util=89.41%
00:08:29.406 16:21:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:08:29.406 16:21:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3680017
00:08:29.406 16:21:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:08:29.406 16:21:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:08:29.406 [global]
00:08:29.406 thread=1
00:08:29.406 invalidate=1
00:08:29.406 rw=read
00:08:29.406 time_based=1
00:08:29.406 runtime=10
00:08:29.406 ioengine=libaio
00:08:29.406 direct=1
00:08:29.406 bs=4096
00:08:29.406 iodepth=1
00:08:29.406 norandommap=1
00:08:29.406 numjobs=1
00:08:29.406
00:08:29.406 [job0]
00:08:29.406 filename=/dev/nvme0n1
00:08:29.406 [job1]
00:08:29.406 filename=/dev/nvme0n2
00:08:29.406 [job2]
00:08:29.406 filename=/dev/nvme0n3
00:08:29.406 [job3]
00:08:29.406 filename=/dev/nvme0n4
00:08:29.406 Could not set queue depth (nvme0n1)
00:08:29.406 Could not set queue depth (nvme0n2)
00:08:29.406 Could not set queue depth (nvme0n3)
00:08:29.406 Could not set queue depth (nvme0n4)
00:08:29.406 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:29.406 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:29.406 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:29.406 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:29.406 fio-3.35
00:08:29.406 Starting 4 threads
00:08:32.744 16:21:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:08:32.744 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=66572288, buflen=4096
00:08:32.744 fio: pid=3680179, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:32.744 16:21:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:08:32.744 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=73256960, buflen=4096
00:08:32.744 fio: pid=3680178, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:32.744 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:32.744 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:08:32.744 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=40583168, buflen=4096
00:08:32.744 fio: pid=3680176, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:32.744 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:32.744 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:08:33.001 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=884736, buflen=4096
00:08:33.001 fio: pid=3680177, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:33.001 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:33.001 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:08:33.001
00:08:33.001 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3680176: Fri Dec 6 16:21:27 2024
00:08:33.001 read: IOPS=8495, BW=33.2MiB/s (34.8MB/s)(103MiB/3095msec)
00:08:33.001 slat (usec): min=5, max=18741, avg= 8.87, stdev=152.46
00:08:33.001 clat (usec): min=44, max=509, avg=106.84, stdev=45.12
00:08:33.001 lat (usec): min=52, max=18846, avg=115.72, stdev=159.03
00:08:33.001 clat percentiles (usec):
00:08:33.001 | 1.00th=[ 59], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 75],
00:08:33.001 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88],
00:08:33.001 | 70.00th=[ 113], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 184],
00:08:33.001 | 99.00th=[ 229], 99.50th=[ 233], 99.90th=[ 245], 99.95th=[ 249],
00:08:33.001 | 99.99th=[ 265]
00:08:33.001 bw ( KiB/s): min=22440, max=46088, per=30.02%, avg=34302.40, stdev=11783.37, samples=5
00:08:33.001 iops : min= 5610, max=11522, avg=8575.60, stdev=2945.84, samples=5
00:08:33.001 lat (usec) : 50=0.12%, 100=66.94%, 250=32.89%, 500=0.04%, 750=0.01%
00:08:33.001 cpu : usr=2.07%, sys=7.18%, ctx=26298, majf=0, minf=1
00:08:33.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:33.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 issued rwts: total=26293,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:33.001 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:33.001 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3680177: Fri Dec 6 16:21:27 2024
00:08:33.001 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(129MiB/3270msec)
00:08:33.001 slat (usec): min=4, max=12910, avg= 8.71, stdev=133.89
00:08:33.001 clat (usec): min=43, max=19525, avg=89.18, stdev=112.14
00:08:33.001 lat (usec): min=53, max=19532, avg=97.88, stdev=174.67
00:08:33.001 clat percentiles (usec):
00:08:33.001 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 68],
00:08:33.001 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75],
00:08:33.001 | 70.00th=[ 79], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 151],
00:08:33.001 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 198], 99.95th=[ 204],
00:08:33.001 | 99.99th=[ 302]
00:08:33.001 bw ( KiB/s): min=26104, max=51224, per=34.44%, avg=39353.83, stdev=11382.97, samples=6
00:08:33.001 iops : min= 6526, max=12806, avg=9838.33, stdev=2845.69, samples=6
00:08:33.001 lat (usec) : 50=0.51%, 100=74.72%, 250=24.75%, 500=0.01%
00:08:33.001 lat (msec) : 20=0.01%
00:08:33.001 cpu : usr=2.20%, sys=8.60%, ctx=32992, majf=0, minf=2
00:08:33.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:33.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 issued rwts: total=32985,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:33.001 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:33.001 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3680178: Fri Dec 6 16:21:27 2024
00:08:33.001 read: IOPS=6199, BW=24.2MiB/s (25.4MB/s)(69.9MiB/2885msec)
00:08:33.001 slat (usec): min=2, max=7915, avg= 8.46, stdev=83.12
00:08:33.001 clat (usec): min=57, max=387, avg=151.28, stdev=28.73
00:08:33.001 lat (usec): min=61, max=8066, avg=159.74, stdev=87.69
00:08:33.001 clat percentiles (usec):
00:08:33.001 | 1.00th=[ 80], 5.00th=[ 98], 10.00th=[ 109], 20.00th=[ 137],
00:08:33.001 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157],
00:08:33.001 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 202],
00:08:33.001 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 247], 99.95th=[ 249],
00:08:33.001 | 99.99th=[ 314]
00:08:33.001 bw ( KiB/s): min=22592, max=26080, per=21.31%, avg=24356.80, stdev=1714.78, samples=5
00:08:33.001 iops : min= 5648, max= 6520, avg=6089.20, stdev=428.69, samples=5
00:08:33.001 lat (usec) : 100=5.91%, 250=94.04%, 500=0.04%
00:08:33.001 cpu : usr=1.87%, sys=5.27%, ctx=17889, majf=0, minf=2
00:08:33.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:33.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 issued rwts: total=17886,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:33.001 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:33.001 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3680179: Fri Dec 6 16:21:27 2024
00:08:33.001 read: IOPS=6000, BW=23.4MiB/s (24.6MB/s)(63.5MiB/2709msec)
00:08:33.001 slat (nsec): min=6202, max=35866, avg=7609.24, stdev=969.00
00:08:33.001 clat (usec): min=70, max=342, avg=156.41, stdev=24.46
00:08:33.001 lat (usec): min=77, max=349, avg=164.02, stdev=24.50
00:08:33.001 clat percentiles (usec):
00:08:33.001 | 1.00th=[ 97], 5.00th=[ 121], 10.00th=[ 135], 20.00th=[ 141],
00:08:33.001 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 161],
00:08:33.001 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 206],
00:08:33.001 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 249], 99.95th=[ 251],
00:08:33.001 | 99.99th=[ 281]
00:08:33.001 bw ( KiB/s): min=22608, max=26112, per=21.34%, avg=24387.20, stdev=1742.94, samples=5
00:08:33.001 iops : min= 5652, max= 6528, avg=6096.80, stdev=435.74, samples=5
00:08:33.001 lat (usec) : 100=1.59%, 250=98.33%, 500=0.08%
00:08:33.001 cpu : usr=1.48%, sys=5.47%, ctx=16254, majf=0, minf=2
00:08:33.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:33.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:33.001 issued rwts: total=16254,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:33.001 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:33.001
00:08:33.001 Run status group 0 (all jobs):
00:08:33.001 READ: bw=112MiB/s (117MB/s), 23.4MiB/s-39.4MiB/s (24.6MB/s-41.3MB/s), io=365MiB (383MB), run=2709-3270msec
00:08:33.001
00:08:33.001 Disk stats (read/write):
00:08:33.001 nvme0n1: ios=23899/0, merge=0/0, ticks=2521/0, in_queue=2521, util=94.79%
00:08:33.001 nvme0n2: ios=30732/0, merge=0/0, ticks=2694/0, in_queue=2694, util=94.71%
00:08:33.001 nvme0n3: ios=17801/0, merge=0/0, ticks=2600/0, in_queue=2600, util=96.13%
00:08:33.001 nvme0n4: ios=15877/0, merge=0/0, ticks=2416/0, in_queue=2416, util=96.46%
00:08:33.258 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:33.258 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:08:33.258 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:33.258 16:21:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:08:33.521 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:33.521 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:08:33.777 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:33.777 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:08:34.034 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:08:34.034 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3680017
00:08:34.034 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:08:34.034 16:21:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:08:34.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:08:34.960 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:08:34.960 nvmf hotplug test: fio failed as expected
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:34.961 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:34.961 rmmod nvme_rdma
00:08:34.961 rmmod nvme_fabrics
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3676957 ']'
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3676957
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3676957 ']'
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3676957
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676957
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676957'
00:08:35.218 killing process with pid 3676957
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3676957
00:08:35.218 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3676957
00:08:35.475 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:35.475 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:35.475
00:08:35.475 real 0m24.906s
00:08:35.475 user 2m1.907s
00:08:35.475 sys 0m8.982s
00:08:35.475 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:35.475 16:21:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:35.475 ************************************
00:08:35.475 END TEST nvmf_fio_target
00:08:35.475 ************************************
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:35.475 ************************************
00:08:35.475 START TEST nvmf_bdevio
00:08:35.475 ************************************
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma
00:08:35.475 * Looking for test storage...
00:08:35.475 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:08:35.475 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:35.733 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.734 --rc genhtml_branch_coverage=1
00:08:35.734 --rc genhtml_function_coverage=1
00:08:35.734 --rc genhtml_legend=1
00:08:35.734 --rc geninfo_all_blocks=1
00:08:35.734 --rc geninfo_unexecuted_blocks=1
00:08:35.734
00:08:35.734 '
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.734 --rc genhtml_branch_coverage=1
00:08:35.734 --rc genhtml_function_coverage=1
00:08:35.734 --rc genhtml_legend=1
00:08:35.734 --rc geninfo_all_blocks=1
00:08:35.734 --rc geninfo_unexecuted_blocks=1
00:08:35.734
00:08:35.734 '
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.734 --rc genhtml_branch_coverage=1
00:08:35.734 --rc genhtml_function_coverage=1
00:08:35.734 --rc genhtml_legend=1
00:08:35.734 --rc geninfo_all_blocks=1
00:08:35.734 --rc geninfo_unexecuted_blocks=1
00:08:35.734
00:08:35.734 '
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:35.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.734 --rc genhtml_branch_coverage=1
00:08:35.734 --rc genhtml_function_coverage=1
00:08:35.734 --rc genhtml_legend=1
00:08:35.734 --rc geninfo_all_blocks=1
00:08:35.734 --rc geninfo_unexecuted_blocks=1
00:08:35.734
00:08:35.734 '
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:08:35.734 16:21:30
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.734 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.734 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.735 16:21:30 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:41.028 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:41.286 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:41.286 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:41.286 16:21:35 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:41.286 Found net devices under 0000:18:00.0: mlx_0_0 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:41.286 Found net devices under 0000:18:00.1: mlx_0_1 00:08:41.286 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
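[editor's note] The load_ib_rdma_modules step traced above reduces to loading the kernel's InfiniBand/RDMA stack. A minimal standalone sketch of the same sequence, assuming in-tree Linux drivers (order mirrors nvmf/common.sh@66-72 in the trace):

  # Core IB verbs plus the connection managers needed for NVMe-oF over RDMA:
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done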
00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:41.287 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.287 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:41.287 altname enp24s0f0np0 00:08:41.287 altname ens785f0np0 00:08:41.287 inet 192.168.100.8/24 scope global mlx_0_0 00:08:41.287 valid_lft forever preferred_lft forever 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:41.287 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:41.287 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:41.287 altname enp24s0f1np1 00:08:41.287 altname ens785f1np1 00:08:41.287 inet 192.168.100.9/24 scope global mlx_0_1 00:08:41.287 valid_lft forever preferred_lft forever 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
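[editor's note] The get_ip_address helper traced above (nvmf/common.sh@116-117) is a three-stage pipeline; a self-contained sketch, using the interface names from this run:

  # "ip -o -4" prints one line per IPv4 address; field 4 is ADDR/PREFIX,
  # and cut strips the prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
  get_ip_address mlx_0_1   # -> 192.168.100.9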
00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:41.287 192.168.100.9' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:41.287 192.168.100.9' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:41.287 192.168.100.9' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:41.287 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3684652 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3684652 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3684652 ']' 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.288 16:21:35 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.288 [2024-12-06 16:21:35.999045] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:08:41.288 [2024-12-06 16:21:35.999092] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.544 [2024-12-06 16:21:36.057408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.544 [2024-12-06 16:21:36.096596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.544 [2024-12-06 16:21:36.096630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.544 [2024-12-06 16:21:36.096640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.544 [2024-12-06 16:21:36.096646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.544 [2024-12-06 16:21:36.096650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
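[editor's note] The target was started with -e 0xFFFF, so every tracepoint group is live; the two inspection options are the ones the app's own notices above suggest (the copy destination here is illustrative):

  # Snapshot the trace ring for shm id 0 (matches nvmf_tgt -i 0):
  spdk_trace -s nvmf -i 0
  # Or keep the ring buffer for offline analysis:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0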
00:08:41.544 [2024-12-06 16:21:36.098098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:41.544 [2024-12-06 16:21:36.098203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:41.544 [2024-12-06 16:21:36.098312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.544 [2024-12-06 16:21:36.098313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.544 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.544 [2024-12-06 16:21:36.257296] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10279c0/0x102beb0) succeed. 00:08:41.544 [2024-12-06 16:21:36.265515] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1029050/0x106d550) succeed. 
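[editor's note] nvmfappstart above, plus the rpc_cmd provisioning calls traced just below, amount to the following sequence when run by hand from an SPDK checkout (rpc_cmd is a wrapper around scripts/rpc.py; the relative paths and the 0.5 s poll are illustrative):

  # Start the target: shm id 0, all trace groups, cores 3-6 (-m 0x78).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  # Crude stand-in for waitforlisten: poll until the RPC socket answers.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # RDMA transport with the options from the trace, then a 64 MiB malloc
  # bdev (512-byte blocks) exported through cnode1 on 192.168.100.8:4420.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420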
00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.802 Malloc0 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:41.802 [2024-12-06 16:21:36.430200] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.802 { 00:08:41.802 "params": { 00:08:41.802 "name": "Nvme$subsystem", 00:08:41.802 "trtype": "$TEST_TRANSPORT", 00:08:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.802 "adrfam": "ipv4", 00:08:41.802 "trsvcid": "$NVMF_PORT", 00:08:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.802 "hdgst": ${hdgst:-false}, 00:08:41.802 "ddgst": ${ddgst:-false} 00:08:41.802 }, 00:08:41.802 "method": "bdev_nvme_attach_controller" 00:08:41.802 } 00:08:41.802 EOF 00:08:41.802 )") 00:08:41.802 16:21:36 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:41.802 16:21:36 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.802 "params": { 00:08:41.802 "name": "Nvme1", 00:08:41.802 "trtype": "rdma", 00:08:41.802 "traddr": "192.168.100.8", 00:08:41.802 "adrfam": "ipv4", 00:08:41.802 "trsvcid": "4420", 00:08:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.802 "hdgst": false, 00:08:41.802 "ddgst": false 00:08:41.802 }, 00:08:41.802 "method": "bdev_nvme_attach_controller" 00:08:41.802 }' 00:08:41.802 [2024-12-06 16:21:36.478278] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:08:41.802 [2024-12-06 16:21:36.478320] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684785 ] 00:08:42.060 [2024-12-06 16:21:36.535925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.060 [2024-12-06 16:21:36.578016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.060 [2024-12-06 16:21:36.578047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.060 [2024-12-06 16:21:36.578048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.060 I/O targets: 00:08:42.060 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:42.060 00:08:42.060 00:08:42.060 CUnit - A unit testing framework for C - Version 2.1-3 00:08:42.060 http://cunit.sourceforge.net/ 00:08:42.060 00:08:42.060 00:08:42.060 Suite: bdevio tests on: Nvme1n1 00:08:42.060 Test: blockdev write read block ...passed 00:08:42.060 Test: blockdev write zeroes read block ...passed 00:08:42.060 Test: blockdev write zeroes read no split ...passed 00:08:42.060 Test: blockdev write zeroes read split ...passed 00:08:42.060 Test: blockdev write zeroes read split partial ...passed 00:08:42.318 Test: blockdev reset ...[2024-12-06 16:21:36.789269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:42.318 [2024-12-06 16:21:36.810897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:42.319 [2024-12-06 16:21:36.838807] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:42.319 passed 00:08:42.319 Test: blockdev write read 8 blocks ...passed 00:08:42.319 Test: blockdev write read size > 128k ...passed 00:08:42.319 Test: blockdev write read invalid size ...passed 00:08:42.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:42.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:42.319 Test: blockdev write read max offset ...passed 00:08:42.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:42.319 Test: blockdev writev readv 8 blocks ...passed 00:08:42.319 Test: blockdev writev readv 30 x 1block ...passed 00:08:42.319 Test: blockdev writev readv block ...passed 00:08:42.319 Test: blockdev writev readv size > 128k ...passed 00:08:42.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:42.319 Test: blockdev comparev and writev ...[2024-12-06 16:21:36.841550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.841585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.841749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.841765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.841915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.841929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.841936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.842109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.842116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.842124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:42.319 [2024-12-06 16:21:36.842130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:42.319 passed 00:08:42.319 Test: blockdev nvme passthru rw ...passed 00:08:42.319 Test: blockdev nvme passthru vendor specific ...[2024-12-06 16:21:36.842389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:42.319 [2024-12-06 16:21:36.842399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.842440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:42.319 [2024-12-06 16:21:36.842447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.842482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:42.319 [2024-12-06 16:21:36.842490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:42.319 [2024-12-06 16:21:36.842532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:42.319 [2024-12-06 16:21:36.842540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:42.319 passed 00:08:42.319 Test: blockdev nvme admin passthru ...passed 00:08:42.319 Test: blockdev copy ...passed 00:08:42.319 00:08:42.319 Run Summary: Type Total Ran Passed Failed Inactive 00:08:42.319 suites 1 1 n/a 0 0 00:08:42.319 tests 23 23 23 0 0 00:08:42.319 asserts 152 152 152 0 n/a 00:08:42.319 00:08:42.319 Elapsed time = 0.169 seconds 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.319 rmmod nvme_rdma 00:08:42.319 rmmod nvme_fabrics 00:08:42.319 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.577 16:21:37 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3684652 ']' 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3684652 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3684652 ']' 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3684652 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3684652 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3684652' 00:08:42.577 killing process with pid 3684652 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3684652 00:08:42.577 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3684652 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:42.836 00:08:42.836 real 0m7.292s 00:08:42.836 user 0m7.691s 00:08:42.836 sys 0m4.757s 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 ************************************ 00:08:42.836 END TEST nvmf_bdevio 00:08:42.836 ************************************ 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:42.836 00:08:42.836 real 3m46.246s 00:08:42.836 user 10m24.530s 00:08:42.836 sys 1m17.297s 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 ************************************ 00:08:42.836 END TEST nvmf_target_core 00:08:42.836 ************************************ 00:08:42.836 16:21:37 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:08:42.836 16:21:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.836 16:21:37 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.836 16:21:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:42.836 ************************************ 00:08:42.836 START TEST nvmf_target_extra 00:08:42.836 ************************************ 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:08:42.836 * Looking for test storage... 00:08:42.836 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.836 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:43.095 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.096 --rc genhtml_branch_coverage=1 00:08:43.096 --rc genhtml_function_coverage=1 00:08:43.096 --rc genhtml_legend=1 00:08:43.096 --rc geninfo_all_blocks=1 00:08:43.096 --rc geninfo_unexecuted_blocks=1 00:08:43.096 00:08:43.096 ' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.096 --rc genhtml_branch_coverage=1 00:08:43.096 --rc genhtml_function_coverage=1 00:08:43.096 --rc genhtml_legend=1 00:08:43.096 --rc geninfo_all_blocks=1 00:08:43.096 --rc geninfo_unexecuted_blocks=1 00:08:43.096 00:08:43.096 ' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.096 --rc genhtml_branch_coverage=1 00:08:43.096 --rc genhtml_function_coverage=1 00:08:43.096 --rc genhtml_legend=1 00:08:43.096 --rc geninfo_all_blocks=1 00:08:43.096 --rc geninfo_unexecuted_blocks=1 00:08:43.096 00:08:43.096 ' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.096 --rc genhtml_branch_coverage=1 00:08:43.096 --rc genhtml_function_coverage=1 00:08:43.096 --rc genhtml_legend=1 00:08:43.096 --rc geninfo_all_blocks=1 00:08:43.096 --rc geninfo_unexecuted_blocks=1 00:08:43.096 00:08:43.096 ' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.096 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:43.096 ************************************ 00:08:43.096 START TEST nvmf_example 00:08:43.096 ************************************ 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:43.096 * Looking for test storage... 
00:08:43.096 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.096 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.391 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.392 --rc genhtml_branch_coverage=1 00:08:43.392 --rc genhtml_function_coverage=1 00:08:43.392 --rc genhtml_legend=1 00:08:43.392 --rc geninfo_all_blocks=1 00:08:43.392 --rc geninfo_unexecuted_blocks=1 00:08:43.392 00:08:43.392 ' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.392 --rc genhtml_branch_coverage=1 00:08:43.392 --rc genhtml_function_coverage=1 00:08:43.392 --rc genhtml_legend=1 00:08:43.392 --rc geninfo_all_blocks=1 00:08:43.392 --rc geninfo_unexecuted_blocks=1 00:08:43.392 00:08:43.392 ' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.392 --rc genhtml_branch_coverage=1 00:08:43.392 --rc genhtml_function_coverage=1 00:08:43.392 --rc genhtml_legend=1 00:08:43.392 --rc geninfo_all_blocks=1 00:08:43.392 --rc geninfo_unexecuted_blocks=1 00:08:43.392 00:08:43.392 ' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.392 --rc genhtml_branch_coverage=1 00:08:43.392 --rc genhtml_function_coverage=1 00:08:43.392 --rc genhtml_legend=1 00:08:43.392 --rc geninfo_all_blocks=1 00:08:43.392 --rc geninfo_unexecuted_blocks=1 00:08:43.392 00:08:43.392 ' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
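The lt/cmp_versions walk traced above (scripts/common.sh@333-368) is how the harness decides that lcov 1.15 predates 2 and picks the matching coverage options: both version strings are split on dots and dashes, the fields are compared numerically left to right, and the first difference decides. A reduced standalone sketch of the same idea, not the harness's own function, assuming lcov is on PATH:

    # Return 0 if version $1 is older than version $2.
    version_lt() {
        local IFS='.-' a b i n
        read -ra a <<< "$1"; read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field: older
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first higher field: newer
        done
        return 1    # all fields equal
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.0"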
00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.392 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
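paths/export.sh prepends the same protoc/go/golangci directories every time it is sourced, which is why the PATH values echoed above carry several copies of each entry. Harmless for lookup, but if the growth ever mattered, duplicates could be collapsed while keeping first-seen order; a sketch:

    # Split PATH on ':', keep the first occurrence of each non-empty entry,
    # and rejoin. Relies only on tr/awk/paste.
    PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | awk 'NF && !seen[$0]++' | paste -sd: -)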
00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:43.392 16:21:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.748 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:48.749 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
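The discovery pass above matches each cached PCI ID against the mlx table (0x1015 in this run, with separate branches for 0x1017/0x1019) before resolving the net devices under /sys. Outside the harness, the same two ports can be spotted with lspci; a sketch, not the pci_bus_cache walk itself:

    # List Mellanox (vendor 0x15b3) functions with their device IDs and the
    # net interfaces bound to them, mirroring the "Found ..." lines above.
    for slot in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
        ids=$(lspci -n -s "$slot" | awk '{print $3}')    # e.g. 15b3:1015
        echo "Found $slot (0x${ids%:*} - 0x${ids#*:})"
        ls "/sys/bus/pci/devices/$slot/net"              # e.g. mlx_0_0
    done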
00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:48.749 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:48.749 Found net devices under 0000:18:00.0: mlx_0_0 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:48.749 Found net devices under 0000:18:00.1: mlx_0_1 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.749 16:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.749 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:48.750 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:48.750 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:49.009 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.009 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:49.009 altname enp24s0f0np0 00:08:49.009 altname ens785f0np0 00:08:49.009 inet 192.168.100.8/24 scope global mlx_0_0 00:08:49.009 valid_lft forever preferred_lft forever 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:49.009 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.009 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:49.009 altname enp24s0f1np1 00:08:49.009 altname ens785f1np1 00:08:49.009 inet 192.168.100.9/24 scope global mlx_0_1 00:08:49.009 valid_lft forever preferred_lft forever 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:49.009 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.010 16:21:43 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.010 192.168.100.9' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:49.010 192.168.100.9' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:49.010 192.168.100.9' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3688366 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3688366 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3688366 ']' 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
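The RDMA_IP_LIST / NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP assignments above reduce to a short pipeline: take the first IPv4 address of each RDMA interface, then peel off the first and second list entries. The same steps extracted from the nvmf/common.sh trace, with the interface names from this run:

    get_ip_address() {
        # First IPv4 address on the interface, e.g. 192.168.100.8 for mlx_0_0.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(printf '%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9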
00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.010 16:21:43 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.944 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.201 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.201 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.201 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:50.202 16:21:44 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:02.398 Initializing NVMe Controllers 00:09:02.398 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.398 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:02.398 Initialization complete. Launching workers. 00:09:02.398 ======================================================== 00:09:02.398 Latency(us) 00:09:02.398 Device Information : IOPS MiB/s Average min max 00:09:02.398 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26031.76 101.69 2460.15 605.08 15983.05 00:09:02.398 ======================================================== 00:09:02.398 Total : 26031.76 101.69 2460.15 605.08 15983.05 00:09:02.398 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.398 16:21:55 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:02.398 rmmod nvme_rdma 00:09:02.398 rmmod nvme_fabrics 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3688366 ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3688366 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3688366 ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3688366 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3688366 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:02.398 16:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3688366' 00:09:02.398 killing process with pid 3688366 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3688366 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3688366 00:09:02.398 nvmf threads initialize successfully 00:09:02.398 bdev subsystem init successfully 00:09:02.398 created a nvmf target service 00:09:02.398 create targets's poll groups done 00:09:02.398 all subsystems of target started 00:09:02.398 nvmf target is running 00:09:02.398 all subsystems of target stopped 00:09:02.398 destroy targets's poll groups done 00:09:02.398 destroyed the nvmf target service 00:09:02.398 bdev subsystem finish successfully 00:09:02.398 nvmf threads destroy successfully 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.398 00:09:02.398 real 0m18.666s 00:09:02.398 user 0m51.795s 00:09:02.398 sys 0m4.843s 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:02.398 ************************************ 00:09:02.398 END TEST nvmf_example 00:09:02.398 ************************************ 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:02.398 ************************************ 00:09:02.398 START TEST nvmf_filesystem 00:09:02.398 ************************************ 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:02.398 * Looking for test storage... 
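Condensed, the nvmf_example run that finished just above ("END TEST nvmf_example") is: start the example target, create an RDMA transport, back a subsystem with a 64 MiB Malloc bdev, listen on the first RDMA IP, drive it with spdk_nvme_perf for 10 seconds, then kill the target. A hand-run sketch of the same sequence, with the paths and 192.168.100.8 taken from this run (the harness additionally traps signals and polls the RPC socket rather than sleeping):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk      # tree used in this run
    "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &      # target on cores 0-3
    tgt=$!
    sleep 2    # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock
    rpc="$SPDK/scripts/rpc.py"
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512                        # 64 MiB, 512 B blocks -> Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    kill "$tgt"

At queue depth 64 with 4 KiB I/O, the results table above reports about 26k IOPS (~102 MiB/s) at ~2.46 ms average latency for this run.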
00:09:02.398 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.398 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.399 --rc genhtml_branch_coverage=1 00:09:02.399 --rc genhtml_function_coverage=1 00:09:02.399 --rc genhtml_legend=1 00:09:02.399 --rc geninfo_all_blocks=1 00:09:02.399 --rc geninfo_unexecuted_blocks=1 00:09:02.399 00:09:02.399 ' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.399 --rc genhtml_branch_coverage=1 00:09:02.399 --rc genhtml_function_coverage=1 00:09:02.399 --rc genhtml_legend=1 00:09:02.399 --rc geninfo_all_blocks=1 00:09:02.399 --rc geninfo_unexecuted_blocks=1 00:09:02.399 00:09:02.399 ' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.399 --rc genhtml_branch_coverage=1 00:09:02.399 --rc genhtml_function_coverage=1 00:09:02.399 --rc genhtml_legend=1 00:09:02.399 --rc geninfo_all_blocks=1 00:09:02.399 --rc geninfo_unexecuted_blocks=1 00:09:02.399 00:09:02.399 ' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.399 --rc genhtml_branch_coverage=1 00:09:02.399 --rc genhtml_function_coverage=1 00:09:02.399 --rc genhtml_legend=1 00:09:02.399 --rc geninfo_all_blocks=1 00:09:02.399 --rc geninfo_unexecuted_blocks=1 00:09:02.399 00:09:02.399 ' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:02.399 16:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:02.399 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:02.399 16:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:02.400 16:21:56 
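The applications.sh lines traced above define every SPDK launcher (ISCSI_APP, NVMF_APP, VHOST_APP, DD_APP, SPDK_APP) as a one-element bash array rather than a plain string. A minimal sketch of why that pattern is useful, with the binary path taken from the trace and a purely illustrative extra flag:

    # Storing the command as an array keeps the path a single word even if it
    # contains spaces, and lets call sites splice in per-test arguments.
    _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin  # from the trace
    NVMF_APP=("$_app_dir/nvmf_tgt")
    # "${NVMF_APP[@]}" expands element-by-element, so appending options is safe:
    run_app() { echo "would exec:" "$@"; }
    run_app "${NVMF_APP[@]}" -m 0x3   # -m (core mask) is illustrative, not from this run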
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:02.400 #define SPDK_CONFIG_H 00:09:02.400 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:02.400 #define SPDK_CONFIG_APPS 1 00:09:02.400 #define SPDK_CONFIG_ARCH native 00:09:02.400 #undef SPDK_CONFIG_ASAN 00:09:02.400 #undef SPDK_CONFIG_AVAHI 00:09:02.400 #undef SPDK_CONFIG_CET 00:09:02.400 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:02.400 #define SPDK_CONFIG_COVERAGE 1 00:09:02.400 #define SPDK_CONFIG_CROSS_PREFIX 00:09:02.400 #undef SPDK_CONFIG_CRYPTO 00:09:02.400 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:02.400 #undef SPDK_CONFIG_CUSTOMOCF 00:09:02.400 #undef SPDK_CONFIG_DAOS 00:09:02.400 #define SPDK_CONFIG_DAOS_DIR 00:09:02.400 #define SPDK_CONFIG_DEBUG 1 00:09:02.400 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:02.400 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:02.400 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:02.400 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:02.400 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:02.400 #undef SPDK_CONFIG_DPDK_UADK 00:09:02.400 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:02.400 #define SPDK_CONFIG_EXAMPLES 1 00:09:02.400 #undef SPDK_CONFIG_FC 00:09:02.400 #define SPDK_CONFIG_FC_PATH 00:09:02.400 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:02.400 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:02.400 #define SPDK_CONFIG_FSDEV 1 00:09:02.400 #undef SPDK_CONFIG_FUSE 00:09:02.400 #undef SPDK_CONFIG_FUZZER 00:09:02.400 #define SPDK_CONFIG_FUZZER_LIB 00:09:02.400 #undef SPDK_CONFIG_GOLANG 00:09:02.400 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:02.400 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:02.400 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:02.400 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:02.400 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:02.400 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:02.400 #undef SPDK_CONFIG_HAVE_LZ4 00:09:02.400 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:02.400 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:02.400 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:02.400 #define SPDK_CONFIG_IDXD 1 00:09:02.400 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:02.400 #undef SPDK_CONFIG_IPSEC_MB 00:09:02.400 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:02.400 #define SPDK_CONFIG_ISAL 1 00:09:02.400 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:02.400 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:02.400 #define SPDK_CONFIG_LIBDIR 00:09:02.400 #undef SPDK_CONFIG_LTO 00:09:02.400 #define SPDK_CONFIG_MAX_LCORES 128 00:09:02.400 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:02.400 #define SPDK_CONFIG_NVME_CUSE 1 00:09:02.400 #undef SPDK_CONFIG_OCF 00:09:02.400 #define SPDK_CONFIG_OCF_PATH 00:09:02.400 #define SPDK_CONFIG_OPENSSL_PATH 00:09:02.400 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:02.400 #define SPDK_CONFIG_PGO_DIR 00:09:02.400 #undef SPDK_CONFIG_PGO_USE 00:09:02.400 #define SPDK_CONFIG_PREFIX /usr/local 00:09:02.400 #undef SPDK_CONFIG_RAID5F 00:09:02.400 #undef SPDK_CONFIG_RBD 00:09:02.400 #define SPDK_CONFIG_RDMA 1 00:09:02.400 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:02.400 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:02.400 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:02.400 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:02.400 #define SPDK_CONFIG_SHARED 1 00:09:02.400 #undef SPDK_CONFIG_SMA 00:09:02.400 
#define SPDK_CONFIG_TESTS 1 00:09:02.400 #undef SPDK_CONFIG_TSAN 00:09:02.400 #define SPDK_CONFIG_UBLK 1 00:09:02.400 #define SPDK_CONFIG_UBSAN 1 00:09:02.400 #undef SPDK_CONFIG_UNIT_TESTS 00:09:02.400 #undef SPDK_CONFIG_URING 00:09:02.400 #define SPDK_CONFIG_URING_PATH 00:09:02.400 #undef SPDK_CONFIG_URING_ZNS 00:09:02.400 #undef SPDK_CONFIG_USDT 00:09:02.400 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:02.400 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:02.400 #undef SPDK_CONFIG_VFIO_USER 00:09:02.400 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:02.400 #define SPDK_CONFIG_VHOST 1 00:09:02.400 #define SPDK_CONFIG_VIRTIO 1 00:09:02.400 #undef SPDK_CONFIG_VTUNE 00:09:02.400 #define SPDK_CONFIG_VTUNE_DIR 00:09:02.400 #define SPDK_CONFIG_WERROR 1 00:09:02.400 #define SPDK_CONFIG_WPDK_DIR 00:09:02.400 #undef SPDK_CONFIG_XNVME 00:09:02.400 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:02.400 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:02.401 16:21:56 
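The PATH value in the paths/export.sh trace above carries many repeated copies of the /opt/go, /opt/protoc, and /opt/golangci directories, because every re-source of export.sh prepends them again; the same growth recurs for LD_LIBRARY_PATH and PYTHONPATH further down in this log. A hedged sketch of an idempotent prepend (the path_prepend helper is hypothetical, not part of the SPDK tree):

    # Prepend a directory only when ":$PATH:" does not already contain it.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present: leave PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH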
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:09:02.401 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:02.402 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3690805 ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3690805 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.JMmb40 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.JMmb40/tests/target /tmp/spdk.JMmb40 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:02.403 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=72532127744 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=78631636992 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6099509248 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39254122496 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315816448 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=61693952 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=15703224320 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=15726329856 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23105536 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39315058688 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315820544 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=761856 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:02.404 16:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=7863148544 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=7863160832 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:02.404 * Looking for test storage... 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=72532127744 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8314101760 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.404 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:02.404 16:21:56 
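The set_test_storage trace above shows how the harness picks a test directory: it parses `df -T` output into associative arrays keyed by mount point, resolves which mount backs each storage candidate, and accepts the first one with the requested free space (2 GiB here, with a /tmp fallback produced by `mktemp -udt`). A condensed sketch of that parsing and check, following the variable names in the trace but simplified to a single candidate directory:

    # Parse `df -T` into parallel maps; df reports 1K blocks, so scale to bytes.
    declare -A mounts fss sizes avails uses
    requested_size=2147483648                    # 2 GiB, as in the trace
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    # Resolve the mount behind a candidate directory, then compare free space.
    target_dir=$PWD                              # stand-in for the real candidate
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        echo "* Found test storage at $target_dir"
    fi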
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:02.404 16:21:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:02.404 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.405 --rc genhtml_branch_coverage=1 00:09:02.405 --rc genhtml_function_coverage=1 00:09:02.405 --rc genhtml_legend=1 00:09:02.405 --rc geninfo_all_blocks=1 00:09:02.405 --rc geninfo_unexecuted_blocks=1 00:09:02.405 00:09:02.405 ' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.405 --rc genhtml_branch_coverage=1 00:09:02.405 --rc genhtml_function_coverage=1 00:09:02.405 --rc genhtml_legend=1 00:09:02.405 --rc geninfo_all_blocks=1 00:09:02.405 --rc geninfo_unexecuted_blocks=1 00:09:02.405 00:09:02.405 ' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.405 --rc genhtml_branch_coverage=1 00:09:02.405 --rc genhtml_function_coverage=1 00:09:02.405 --rc genhtml_legend=1 00:09:02.405 --rc geninfo_all_blocks=1 00:09:02.405 --rc geninfo_unexecuted_blocks=1 00:09:02.405 00:09:02.405 ' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.405 --rc genhtml_branch_coverage=1 00:09:02.405 --rc genhtml_function_coverage=1 00:09:02.405 --rc genhtml_legend=1 00:09:02.405 --rc geninfo_all_blocks=1 00:09:02.405 --rc geninfo_unexecuted_blocks=1 00:09:02.405 00:09:02.405 ' 00:09:02.405 
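[Editor's note] The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version field by field (IFS=.-: splits each version into the ver1/ver2 arrays) to decide whether the pre-2.0 coverage flags are needed. Where GNU sort is available, the same "is A older than B" test can be sketched more compactly; this is an illustrative alternative, not the harness's actual code:

    version_lt() {
        # True when $1 sorts strictly before $2 under version ordering.
        [ "$1" != "$2" ] &&
            [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    # Mirrors the traced outcome: lcov 1.15 < 2, so the old-style flags are set.
    version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'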
16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.405 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.405 16:21:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:07.668 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:07.668 
16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:07.668 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:07.668 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:07.669 Found net devices under 0000:18:00.0: mlx_0_0 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:07.669 Found net devices under 0000:18:00.1: mlx_0_1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
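[Editor's note] The "Found net devices under 0000:18:00.x" lines come from globbing the kernel's sysfs view of each RDMA-capable PCI function: /sys/bus/pci/devices/$pci/net/* holds one entry per netdev bound to that function. The same lookup as a standalone sketch, mirroring the traced glob and prefix-strip:

    for pci in 0000:18:00.0 0000:18:00.1; do      # the two mlx5 functions above
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the directory prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done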
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.669 16:22:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:07.669 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:07.669 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:07.669 altname enp24s0f0np0 00:09:07.669 altname ens785f0np0 00:09:07.669 inet 192.168.100.8/24 scope global mlx_0_0 00:09:07.669 valid_lft forever preferred_lft forever 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:07.669 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:07.669 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:07.669 altname enp24s0f1np1 00:09:07.669 altname ens785f1np1 00:09:07.669 inet 192.168.100.9/24 scope global mlx_0_1 00:09:07.669 valid_lft forever preferred_lft forever 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:07.669 16:22:02 
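[Editor's note] get_ip_address above derives each interface's IPv4 address from a single line of ip -o output: field 4 is addr/prefix, and cut drops the prefix length. The same helper in isolation (names mirror the traced function):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this rig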
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:07.669 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:07.670 192.168.100.9' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:07.670 192.168.100.9' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:07.670 192.168.100.9' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:07.670 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:07.927 ************************************ 00:09:07.927 START TEST nvmf_filesystem_no_in_capsule 00:09:07.927 ************************************ 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.927 16:22:02 
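[Editor's note] RDMA_IP_LIST above is a newline-separated string ("192.168.100.8" then "192.168.100.9"), and the harness peels the first and second target IPs off it with head and tail exactly as traced. In isolation:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)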
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3694059 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3694059 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3694059 ']' 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.927 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.927 [2024-12-06 16:22:02.483571] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:09:07.927 [2024-12-06 16:22:02.483610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.927 [2024-12-06 16:22:02.541369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.927 [2024-12-06 16:22:02.580471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.927 [2024-12-06 16:22:02.580507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.927 [2024-12-06 16:22:02.580513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.927 [2024-12-06 16:22:02.580519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.927 [2024-12-06 16:22:02.580523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
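[Editor's note] nvmfappstart above launches the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), records nvmfpid, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait pattern; the real waitforlisten also retries the RPC itself and bounds the retry count:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the RPC socket appears, bailing out if the target dies first.
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died" >&2; exit 1; }
        sleep 0.5
    done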
00:09:07.927 [2024-12-06 16:22:02.581739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.927 [2024-12-06 16:22:02.581833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.927 [2024-12-06 16:22:02.581921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.927 [2024-12-06 16:22:02.581922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.185 [2024-12-06 16:22:02.714279] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:08.185 [2024-12-06 16:22:02.733005] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x117f0c0/0x11835b0) succeed. 00:09:08.185 [2024-12-06 16:22:02.741242] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1180750/0x11c4c50) succeed. 
00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.185 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.444 Malloc1 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.444 [2024-12-06 16:22:02.988781] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:08.444 16:22:02 
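[Editor's note] The rpc_cmd calls traced above amount to the canonical SPDK target bring-up for this test, and they can be replayed by hand against a running target with scripts/rpc.py; the arguments below are copied from the trace:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420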
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.444 16:22:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:08.444 { 00:09:08.444 "name": "Malloc1", 00:09:08.444 "aliases": [ 00:09:08.444 "48fd248b-f987-4e0d-bd22-87e3d11ff4e9" 00:09:08.444 ], 00:09:08.444 "product_name": "Malloc disk", 00:09:08.444 "block_size": 512, 00:09:08.444 "num_blocks": 1048576, 00:09:08.444 "uuid": "48fd248b-f987-4e0d-bd22-87e3d11ff4e9", 00:09:08.444 "assigned_rate_limits": { 00:09:08.444 "rw_ios_per_sec": 0, 00:09:08.444 "rw_mbytes_per_sec": 0, 00:09:08.444 "r_mbytes_per_sec": 0, 00:09:08.444 "w_mbytes_per_sec": 0 00:09:08.444 }, 00:09:08.444 "claimed": true, 00:09:08.444 "claim_type": "exclusive_write", 00:09:08.444 "zoned": false, 00:09:08.444 "supported_io_types": { 00:09:08.444 "read": true, 00:09:08.444 "write": true, 00:09:08.444 "unmap": true, 00:09:08.444 "flush": true, 00:09:08.444 "reset": true, 00:09:08.444 "nvme_admin": false, 00:09:08.444 "nvme_io": false, 00:09:08.444 "nvme_io_md": false, 00:09:08.444 "write_zeroes": true, 00:09:08.444 "zcopy": true, 00:09:08.444 "get_zone_info": false, 00:09:08.444 "zone_management": false, 00:09:08.444 "zone_append": false, 00:09:08.444 "compare": false, 00:09:08.444 "compare_and_write": false, 00:09:08.444 "abort": true, 00:09:08.444 "seek_hole": false, 00:09:08.444 "seek_data": false, 00:09:08.444 "copy": true, 00:09:08.444 "nvme_iov_md": false 00:09:08.444 }, 00:09:08.444 "memory_domains": [ 00:09:08.444 { 00:09:08.444 "dma_device_id": "system", 00:09:08.444 "dma_device_type": 1 00:09:08.444 }, 00:09:08.444 { 00:09:08.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.444 "dma_device_type": 2 00:09:08.444 } 00:09:08.444 ], 00:09:08.444 "driver_specific": {} 00:09:08.444 } 00:09:08.444 ]' 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:09:08.444 16:22:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:09.379 16:22:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.379 16:22:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:09.379 16:22:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.379 16:22:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:09.379 16:22:04 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:11.912 16:22:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.849 ************************************ 00:09:12.849 START TEST filesystem_ext4 00:09:12.849 ************************************ 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:12.849 mke2fs 1.47.0 (5-Feb-2023) 00:09:12.849 Discarding device blocks: 0/522240 done 00:09:12.849 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:12.849 Filesystem UUID: b5a69b81-6540-4639-94f4-01ccf959d639 00:09:12.849 Superblock backups stored on 
blocks: 00:09:12.849 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:12.849 00:09:12.849 Allocating group tables: 0/64 done 00:09:12.849 Writing inode tables: 0/64 done 00:09:12.849 Creating journal (8192 blocks): done 00:09:12.849 Writing superblocks and filesystem accounting information: 0/64 done 00:09:12.849 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3694059 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:12.849 00:09:12.849 real 0m0.187s 00:09:12.849 user 0m0.021s 00:09:12.849 sys 0m0.061s 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:12.849 ************************************ 00:09:12.849 END TEST filesystem_ext4 00:09:12.849 ************************************ 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:09:12.849 ************************************ 00:09:12.849 START TEST filesystem_btrfs 00:09:12.849 ************************************ 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:12.849 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:12.850 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:13.109 btrfs-progs v6.8.1 00:09:13.109 See https://btrfs.readthedocs.io for more information. 00:09:13.109 00:09:13.109 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:13.109 NOTE: several default settings have changed in version 5.15, please make sure 00:09:13.109 this does not affect your deployments: 00:09:13.109 - DUP for metadata (-m dup) 00:09:13.109 - enabled no-holes (-O no-holes) 00:09:13.109 - enabled free-space-tree (-R free-space-tree) 00:09:13.109 00:09:13.109 Label: (null) 00:09:13.109 UUID: cc8b332a-0f12-4d3f-8c17-a7dedc42d264 00:09:13.109 Node size: 16384 00:09:13.109 Sector size: 4096 (CPU page size: 4096) 00:09:13.109 Filesystem size: 510.00MiB 00:09:13.109 Block group profiles: 00:09:13.109 Data: single 8.00MiB 00:09:13.109 Metadata: DUP 32.00MiB 00:09:13.109 System: DUP 8.00MiB 00:09:13.109 SSD detected: yes 00:09:13.109 Zoned device: no 00:09:13.109 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:13.110 Checksum: crc32c 00:09:13.110 Number of devices: 1 00:09:13.110 Devices: 00:09:13.110 ID SIZE PATH 00:09:13.110 1 510.00MiB /dev/nvme0n1p1 00:09:13.110 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3694059 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.110 00:09:13.110 real 0m0.223s 00:09:13.110 user 0m0.027s 00:09:13.110 sys 0m0.106s 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 ************************************ 00:09:13.110 END TEST filesystem_btrfs 
00:09:13.110 ************************************ 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.110 ************************************ 00:09:13.110 START TEST filesystem_xfs 00:09:13.110 ************************************ 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:13.110 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:13.369 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:13.369 = sectsz=512 attr=2, projid32bit=1 00:09:13.369 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:13.369 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:13.369 data = bsize=4096 blocks=130560, imaxpct=25 00:09:13.369 = sunit=0 swidth=0 blks 00:09:13.369 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:13.369 log =internal log bsize=4096 blocks=16384, version=2 00:09:13.369 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:13.369 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:13.369 Discarding blocks...Done. 
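Each mkfs in this run is followed by the same short exercise of the new filesystem, driven by target/filesystem.sh. A minimal sketch of that per-filesystem check, reconstructed from the xtrace lines around it (the make_filesystem helper, mount point, kill -0 check, and lsblk greps appear verbatim in the trace; the variable names here are illustrative, not the script text):

    # Sketch of the per-filesystem check in target/filesystem.sh,
    # reconstructed from the xtrace; not the verbatim script.
    fstype=$1                  # ext4 | btrfs | xfs
    dev=/dev/nvme0n1p1         # first partition of the exported namespace

    make_filesystem "$fstype" "$dev"           # wraps mkfs.$fstype, adding -F/-f as needed
    mount "$dev" /mnt/device
    touch /mnt/device/aaa                      # prove the filesystem accepts a write
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                         # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible to the host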
00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3694059 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:13.369 00:09:13.369 real 0m0.184s 00:09:13.369 user 0m0.024s 00:09:13.369 sys 0m0.067s 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.369 16:22:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:13.369 ************************************ 00:09:13.369 END TEST filesystem_xfs 00:09:13.369 ************************************ 00:09:13.369 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:13.369 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:13.369 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.306 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.306 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:14.306 16:22:08 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:14.306 16:22:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3694059 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3694059 ']' 00:09:14.306 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3694059 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3694059 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3694059' 00:09:14.565 killing process with pid 3694059 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3694059 00:09:14.565 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3694059 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:14.824 00:09:14.824 real 0m7.008s 00:09:14.824 user 0m27.326s 00:09:14.824 sys 0m1.007s 00:09:14.824 16:22:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.824 ************************************ 00:09:14.824 END TEST nvmf_filesystem_no_in_capsule 00:09:14.824 ************************************ 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:14.824 ************************************ 00:09:14.824 START TEST nvmf_filesystem_in_capsule 00:09:14.824 ************************************ 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.824 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3695580 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3695580 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3695580 ']' 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
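The in-capsule variant starts its own target the same way: nvmfappstart launches nvmf_tgt and blocks until the RPC socket answers, which is what the "Waiting for process..." message above reports. Roughly, with the launch flags taken from the trace just below and the polling loop body an assumption about the waitforlisten helper rather than its exact text:

    # Simplified view of nvmfappstart/waitforlisten as traced here.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the app answers on its UNIX-domain RPC socket.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid"    # abort the wait if the target died during startup
        sleep 0.1
    done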
00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:14.825 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.083 [2024-12-06 16:22:09.555224] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:09:15.083 [2024-12-06 16:22:09.555260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.083 [2024-12-06 16:22:09.612535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.083 [2024-12-06 16:22:09.651543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.083 [2024-12-06 16:22:09.651583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.083 [2024-12-06 16:22:09.651590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.083 [2024-12-06 16:22:09.651595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.083 [2024-12-06 16:22:09.651600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.083 [2024-12-06 16:22:09.652829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.083 [2024-12-06 16:22:09.652850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.083 [2024-12-06 16:22:09.652941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.083 [2024-12-06 16:22:09.652942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:15.083 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.083 16:22:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.083 [2024-12-06 16:22:09.803894] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfb10c0/0xfb55b0) succeed. 00:09:15.342 [2024-12-06 16:22:09.812003] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfb2750/0xff6c50) succeed. 00:09:15.342 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.342 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:15.342 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.342 16:22:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.342 Malloc1 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.342 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.601 [2024-12-06 16:22:10.079274] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:15.601 16:22:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.601 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:15.601 { 00:09:15.601 "name": "Malloc1", 00:09:15.601 "aliases": [ 00:09:15.601 "c9758985-615a-4792-8b73-9d96f6e88902" 00:09:15.601 ], 00:09:15.601 "product_name": "Malloc disk", 00:09:15.601 "block_size": 512, 00:09:15.601 "num_blocks": 1048576, 00:09:15.602 "uuid": "c9758985-615a-4792-8b73-9d96f6e88902", 00:09:15.602 "assigned_rate_limits": { 00:09:15.602 "rw_ios_per_sec": 0, 00:09:15.602 "rw_mbytes_per_sec": 0, 00:09:15.602 "r_mbytes_per_sec": 0, 00:09:15.602 "w_mbytes_per_sec": 0 00:09:15.602 }, 00:09:15.602 "claimed": true, 00:09:15.602 "claim_type": "exclusive_write", 00:09:15.602 "zoned": false, 00:09:15.602 "supported_io_types": { 00:09:15.602 "read": true, 00:09:15.602 "write": true, 00:09:15.602 "unmap": true, 00:09:15.602 "flush": true, 00:09:15.602 "reset": true, 00:09:15.602 "nvme_admin": false, 00:09:15.602 "nvme_io": false, 00:09:15.602 "nvme_io_md": false, 00:09:15.602 "write_zeroes": true, 00:09:15.602 "zcopy": true, 00:09:15.602 "get_zone_info": false, 00:09:15.602 "zone_management": false, 00:09:15.602 "zone_append": false, 00:09:15.602 "compare": false, 00:09:15.602 "compare_and_write": false, 00:09:15.602 "abort": true, 00:09:15.602 "seek_hole": false, 00:09:15.602 "seek_data": false, 00:09:15.602 "copy": true, 00:09:15.602 "nvme_iov_md": false 00:09:15.602 }, 00:09:15.602 "memory_domains": [ 00:09:15.602 { 00:09:15.602 "dma_device_id": "system", 00:09:15.602 "dma_device_type": 1 00:09:15.602 }, 00:09:15.602 { 00:09:15.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.602 "dma_device_type": 2 00:09:15.602 } 00:09:15.602 ], 00:09:15.602 "driver_specific": {} 00:09:15.602 } 00:09:15.602 ]' 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:15.602 16:22:10 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:15.602 16:22:10 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:16.537 16:22:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.537 16:22:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:16.537 16:22:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.537 16:22:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:16.537 16:22:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:18.438 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:18.438 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:18.438 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:18.697 16:22:13 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:18.697 16:22:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.632 ************************************ 00:09:19.632 START TEST filesystem_in_capsule_ext4 00:09:19.632 ************************************ 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:19.632 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:19.632 mke2fs 1.47.0 (5-Feb-2023) 00:09:19.891 Discarding device blocks: 0/522240 done 00:09:19.891 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:19.891 Filesystem UUID: c71b2935-75ed-42aa-af13-694c01cc62ce 00:09:19.891 Superblock backups stored on blocks: 00:09:19.891 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:19.891 00:09:19.891 Allocating group tables: 0/64 done 00:09:19.891 Writing inode tables: 0/64 done 00:09:19.891 Creating journal (8192 blocks): done 00:09:19.891 Writing superblocks and filesystem accounting information: 0/64 done 00:09:19.891 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3695580 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.891 00:09:19.891 real 0m0.181s 00:09:19.891 user 0m0.023s 00:09:19.891 sys 0m0.059s 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:19.891 ************************************ 00:09:19.891 END TEST filesystem_in_capsule_ext4 00:09:19.891 ************************************ 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 
-- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.891 ************************************ 00:09:19.891 START TEST filesystem_in_capsule_btrfs 00:09:19.891 ************************************ 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:19.891 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:20.149 btrfs-progs v6.8.1 00:09:20.149 See https://btrfs.readthedocs.io for more information. 00:09:20.149 00:09:20.149 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:20.149 NOTE: several default settings have changed in version 5.15, please make sure 00:09:20.149 this does not affect your deployments: 00:09:20.149 - DUP for metadata (-m dup) 00:09:20.149 - enabled no-holes (-O no-holes) 00:09:20.149 - enabled free-space-tree (-R free-space-tree) 00:09:20.149 00:09:20.149 Label: (null) 00:09:20.149 UUID: 067223ef-66a1-4cf8-9e36-6ae688d56d3b 00:09:20.149 Node size: 16384 00:09:20.149 Sector size: 4096 (CPU page size: 4096) 00:09:20.149 Filesystem size: 510.00MiB 00:09:20.149 Block group profiles: 00:09:20.149 Data: single 8.00MiB 00:09:20.149 Metadata: DUP 32.00MiB 00:09:20.149 System: DUP 8.00MiB 00:09:20.149 SSD detected: yes 00:09:20.149 Zoned device: no 00:09:20.149 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:20.149 Checksum: crc32c 00:09:20.149 Number of devices: 1 00:09:20.149 Devices: 00:09:20.149 ID SIZE PATH 00:09:20.149 1 510.00MiB /dev/nvme0n1p1 00:09:20.149 00:09:20.149 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:20.149 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3695580 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:20.150 00:09:20.150 real 0m0.232s 00:09:20.150 user 0m0.022s 00:09:20.150 sys 0m0.115s 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.150 ************************************ 00:09:20.150 END TEST filesystem_in_capsule_btrfs 00:09:20.150 ************************************ 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.150 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:20.409 ************************************ 00:09:20.409 START TEST filesystem_in_capsule_xfs 00:09:20.409 ************************************ 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:20.409 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:20.409 = sectsz=512 attr=2, projid32bit=1 00:09:20.409 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:20.409 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:20.409 data = bsize=4096 blocks=130560, imaxpct=25 00:09:20.409 = sunit=0 swidth=0 blks 00:09:20.409 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:20.409 log =internal log bsize=4096 blocks=16384, version=2 00:09:20.409 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:20.409 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:20.409 Discarding blocks...Done. 
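The in-capsule mkfs output matches the first suite almost line for line; the functional difference is the -c 4096 passed to nvmf_create_transport earlier in this run, which lets payloads up to 4 KiB travel inside the RDMA command capsule instead of being fetched with a separate RDMA READ. The target-side setup, replayed from the rpc_cmd calls in the trace (rpc_cmd wraps scripts/rpc.py, shown here as direct invocations):

    # Target configuration replayed from the rpc_cmd calls above.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    rpc.py bdev_malloc_create 512 512 -b Malloc1    # 512 MiB volume, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420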
00:09:20.409 16:22:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3695580 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:20.409 00:09:20.409 real 0m0.192s 00:09:20.409 user 0m0.028s 00:09:20.409 sys 0m0.061s 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:20.409 ************************************ 00:09:20.409 END TEST filesystem_in_capsule_xfs 00:09:20.409 ************************************ 00:09:20.409 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:20.667 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:20.667 16:22:15 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.601 16:22:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3695580 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3695580 ']' 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3695580 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3695580 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3695580' 00:09:21.601 killing process with pid 3695580 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3695580 00:09:21.601 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3695580 00:09:21.858 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:21.858 00:09:21.858 real 0m7.051s 
00:09:21.858 user 0m27.459s 00:09:21.858 sys 0m1.007s 00:09:21.858 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.858 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:21.858 ************************************ 00:09:21.858 END TEST nvmf_filesystem_in_capsule 00:09:21.858 ************************************ 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:22.115 rmmod nvme_rdma 00:09:22.115 rmmod nvme_fabrics 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:22.115 00:09:22.115 real 0m20.222s 00:09:22.115 user 0m56.641s 00:09:22.115 sys 0m6.420s 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 ************************************ 00:09:22.115 END TEST nvmf_filesystem 00:09:22.115 ************************************ 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:22.115 ************************************ 00:09:22.115 START TEST nvmf_target_discovery 00:09:22.115 ************************************ 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:22.115 * Looking for test storage... 
00:09:22.115 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.115 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.375 --rc genhtml_branch_coverage=1 00:09:22.375 --rc genhtml_function_coverage=1 00:09:22.375 --rc genhtml_legend=1 00:09:22.375 --rc geninfo_all_blocks=1 00:09:22.375 --rc geninfo_unexecuted_blocks=1 00:09:22.375 00:09:22.375 ' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.375 --rc genhtml_branch_coverage=1 00:09:22.375 --rc genhtml_function_coverage=1 00:09:22.375 --rc genhtml_legend=1 00:09:22.375 --rc geninfo_all_blocks=1 00:09:22.375 --rc geninfo_unexecuted_blocks=1 00:09:22.375 00:09:22.375 ' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.375 --rc genhtml_branch_coverage=1 00:09:22.375 --rc genhtml_function_coverage=1 00:09:22.375 --rc genhtml_legend=1 00:09:22.375 --rc geninfo_all_blocks=1 00:09:22.375 --rc geninfo_unexecuted_blocks=1 00:09:22.375 00:09:22.375 ' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.375 --rc genhtml_branch_coverage=1 00:09:22.375 --rc genhtml_function_coverage=1 00:09:22.375 --rc genhtml_legend=1 00:09:22.375 --rc geninfo_all_blocks=1 00:09:22.375 --rc geninfo_unexecuted_blocks=1 00:09:22.375 00:09:22.375 ' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.375 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.376 16:22:16 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.376 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.376 16:22:16 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.027 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.027 16:22:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:29.028 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:29.028 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:29.028 Found net devices under 0000:18:00.0: mlx_0_0 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.028 16:22:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:29.028 Found net devices under 0000:18:00.1: mlx_0_1 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:29.028 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:29.029 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:29.029 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:29.029 altname enp24s0f0np0 00:09:29.029 altname ens785f0np0 00:09:29.029 inet 192.168.100.8/24 scope global mlx_0_0 00:09:29.029 valid_lft forever preferred_lft forever 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:29.029 16:22:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:29.029 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:29.029 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:29.029 altname enp24s0f1np1 00:09:29.029 altname ens785f1np1 00:09:29.029 inet 192.168.100.9/24 scope global mlx_0_1 00:09:29.029 valid_lft forever preferred_lft forever 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:29.029 192.168.100.9' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:29.029 192.168.100.9' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:29.029 192.168.100.9' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.029 16:22:22 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3700348 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3700348 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3700348 ']' 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.029 [2024-12-06 16:22:22.773984] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:09:29.029 [2024-12-06 16:22:22.774031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.029 [2024-12-06 16:22:22.832000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.029 [2024-12-06 16:22:22.871226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.029 [2024-12-06 16:22:22.871265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.029 [2024-12-06 16:22:22.871271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.029 [2024-12-06 16:22:22.871276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.029 [2024-12-06 16:22:22.871281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
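The nvmfappstart step traced above reduces to launching the target binary and waiting for its RPC socket to answer. A minimal standalone sketch, assuming the workspace paths this log records; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its exact code:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # Launch the target with the flags recorded above (-i 0 -e 0xFFFF -m 0xF).
  $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket; rpc_get_methods is a cheap query that
  # succeeds once the app is listening on /var/tmp/spdk.sock.
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done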
00:09:29.029 [2024-12-06 16:22:22.872647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.029 [2024-12-06 16:22:22.872766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.029 [2024-12-06 16:22:22.872843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.029 [2024-12-06 16:22:22.872845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:29.029 16:22:22 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.029 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:29.029 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.029 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.029 [2024-12-06 16:22:23.028629] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x196a0c0/0x196e5b0) succeed. 00:09:29.029 [2024-12-06 16:22:23.036809] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x196b750/0x19afc50) succeed. 
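With the RDMA transport created above, the provisioning traced below loops i over 1..4: create a null bdev, wrap it in subsystem nqn.2016-06.io.spdk:cnode$i, attach the namespace, and add an RDMA listener; it then exposes the discovery service and a port-4430 referral. A sketch of the same sequence as direct rpc.py calls (the harness drives these through its rpc_cmd wrapper; sizes, serials, and addresses are taken verbatim from this log):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 4); do
      # Null bdev sized per NULL_BDEV_SIZE=102400 / NULL_BLOCK_SIZE=512 above.
      $rpc bdev_null_create Null$i 102400 512
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  done
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The nvme discover output that follows should then report six discovery log records: the current discovery subsystem, the four cnode subsystems on trsvcid 4420, and the referral on 4430.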
00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 Null1 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 [2024-12-06 16:22:23.189934] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 Null2 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:29.030 16:22:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 Null3 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 Null4 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.030 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:09:29.030 00:09:29.030 Discovery Log Number of Records 6, Generation counter 6 00:09:29.030 =====Discovery Log Entry 0====== 00:09:29.030 trtype: rdma 00:09:29.030 adrfam: ipv4 00:09:29.030 subtype: current discovery subsystem 00:09:29.030 treq: not required 00:09:29.030 portid: 0 00:09:29.030 trsvcid: 4420 00:09:29.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:29.030 traddr: 192.168.100.8 00:09:29.030 eflags: explicit discovery connections, duplicate discovery information 00:09:29.030 rdma_prtype: not specified 00:09:29.030 rdma_qptype: connected 00:09:29.030 rdma_cms: rdma-cm 00:09:29.030 rdma_pkey: 0x0000 00:09:29.030 =====Discovery Log Entry 1====== 00:09:29.030 trtype: rdma 00:09:29.030 adrfam: ipv4 00:09:29.030 subtype: nvme subsystem 00:09:29.030 treq: not required 00:09:29.030 portid: 0 00:09:29.030 trsvcid: 4420 00:09:29.030 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:29.030 traddr: 192.168.100.8 00:09:29.030 eflags: none 00:09:29.030 rdma_prtype: not specified 00:09:29.030 rdma_qptype: connected 00:09:29.030 rdma_cms: rdma-cm 00:09:29.030 rdma_pkey: 0x0000 00:09:29.030 =====Discovery Log Entry 2====== 00:09:29.030 trtype: rdma 00:09:29.030 adrfam: ipv4 00:09:29.030 subtype: nvme subsystem 00:09:29.030 treq: not required 00:09:29.030 portid: 0 00:09:29.030 trsvcid: 4420 00:09:29.030 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:29.030 traddr: 192.168.100.8 00:09:29.030 eflags: none 00:09:29.030 rdma_prtype: not specified 00:09:29.030 rdma_qptype: connected 00:09:29.030 rdma_cms: rdma-cm 00:09:29.030 rdma_pkey: 0x0000 00:09:29.030 =====Discovery Log Entry 3====== 00:09:29.031 trtype: rdma 00:09:29.031 adrfam: ipv4 00:09:29.031 subtype: nvme subsystem 00:09:29.031 treq: not required 00:09:29.031 portid: 0 00:09:29.031 trsvcid: 4420 00:09:29.031 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:29.031 traddr: 192.168.100.8 00:09:29.031 eflags: none 00:09:29.031 rdma_prtype: not specified 00:09:29.031 rdma_qptype: connected 00:09:29.031 rdma_cms: rdma-cm 00:09:29.031 rdma_pkey: 0x0000 00:09:29.031 =====Discovery Log Entry 4====== 00:09:29.031 trtype: rdma 00:09:29.031 adrfam: ipv4 00:09:29.031 subtype: nvme subsystem 00:09:29.031 treq: not required 00:09:29.031 portid: 0 00:09:29.031 trsvcid: 4420 00:09:29.031 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:29.031 traddr: 192.168.100.8 00:09:29.031 eflags: none 00:09:29.031 rdma_prtype: not specified 00:09:29.031 rdma_qptype: connected 00:09:29.031 rdma_cms: rdma-cm 00:09:29.031 rdma_pkey: 0x0000 00:09:29.031 =====Discovery Log Entry 5====== 00:09:29.031 trtype: rdma 00:09:29.031 adrfam: ipv4 00:09:29.031 subtype: discovery subsystem referral 00:09:29.031 treq: not required 00:09:29.031 portid: 0 00:09:29.031 trsvcid: 4430 00:09:29.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:29.031 traddr: 192.168.100.8 00:09:29.031 eflags: none 00:09:29.031 rdma_prtype: unrecognized 00:09:29.031 rdma_qptype: unrecognized 00:09:29.031 rdma_cms: unrecognized 00:09:29.031 rdma_pkey: 0x0000 00:09:29.031 16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:29.031 Perform nvmf subsystem discovery via RPC 00:09:29.031 16:22:23 
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
[
  {
    "nqn": "nqn.2014-08.org.nvmexpress.discovery",
    "subtype": "Discovery",
    "listen_addresses": [
      {
        "trtype": "RDMA",
        "adrfam": "IPv4",
        "traddr": "192.168.100.8",
        "trsvcid": "4420"
      }
    ],
    "allow_any_host": true,
    "hosts": []
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "subtype": "NVMe",
    "listen_addresses": [
      {
        "trtype": "RDMA",
        "adrfam": "IPv4",
        "traddr": "192.168.100.8",
        "trsvcid": "4420"
      }
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000001",
    "model_number": "SPDK bdev Controller",
    "max_namespaces": 32,
    "min_cntlid": 1,
    "max_cntlid": 65519,
    "namespaces": [
      {
        "nsid": 1,
        "bdev_name": "Null1",
        "name": "Null1",
        "nguid": "5C95D70D934E4824AC1D8A47B066AC8F",
        "uuid": "5c95d70d-934e-4824-ac1d-8a47b066ac8f"
      }
    ]
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode2",
    "subtype": "NVMe",
    "listen_addresses": [
      {
        "trtype": "RDMA",
        "adrfam": "IPv4",
        "traddr": "192.168.100.8",
        "trsvcid": "4420"
      }
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000002",
    "model_number": "SPDK bdev Controller",
    "max_namespaces": 32,
    "min_cntlid": 1,
    "max_cntlid": 65519,
    "namespaces": [
      {
        "nsid": 1,
        "bdev_name": "Null2",
        "name": "Null2",
        "nguid": "785BCF6DF3894AAAA6AB08E95BD6285C",
        "uuid": "785bcf6d-f389-4aaa-a6ab-08e95bd6285c"
      }
    ]
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode3",
    "subtype": "NVMe",
    "listen_addresses": [
      {
        "trtype": "RDMA",
        "adrfam": "IPv4",
        "traddr": "192.168.100.8",
        "trsvcid": "4420"
      }
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000003",
    "model_number": "SPDK bdev Controller",
    "max_namespaces": 32,
    "min_cntlid": 1,
    "max_cntlid": 65519,
    "namespaces": [
      {
        "nsid": 1,
        "bdev_name": "Null3",
        "name": "Null3",
        "nguid": "A359B6536A6C4C2AA9BE9AB02E6F2B42",
        "uuid": "a359b653-6a6c-4c2a-a9be-9ab02e6f2b42"
      }
    ]
  },
  {
    "nqn": "nqn.2016-06.io.spdk:cnode4",
    "subtype": "NVMe",
    "listen_addresses": [
      {
        "trtype": "RDMA",
        "adrfam": "IPv4",
        "traddr": "192.168.100.8",
        "trsvcid": "4420"
      }
    ],
    "allow_any_host": true,
    "hosts": [],
    "serial_number": "SPDK00000000000004",
    "model_number": "SPDK bdev Controller",
    "max_namespaces": 32,
    "min_cntlid": 1,
    "max_cntlid": 65519,
    "namespaces": [
      {
        "nsid": 1,
        "bdev_name": "Null4",
        "name": "Null4",
        "nguid": "E3F1937540944AEDA782D75FE77C64B7",
        "uuid": "e3f19375-4094-4aed-a782-d75fe77c64b7"
      }
    ]
  }
]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
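Teardown, as traced, mirrors setup: each subsystem is deleted before its backing null bdev, the referral is removed, and the test only passes if nothing is left behind. Roughly, under the same rpc.py assumption as the sketch above:

  for i in $(seq 1 4); do
    ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # removes its namespace and listener with it
    ./scripts/rpc.py bdev_null_delete "Null$i"                             # safe once no subsystem references the bdev
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430
  # the '[' -n '' ']' check above is this: the bdev list must come back empty
  check_bdevs=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
  [ -n "$check_bdevs" ] && exit 1   # leftover bdevs fail the test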
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3700348 ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3700348 ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700348'
killing process with pid 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3700348
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]

real	0m7.186s
user	0m5.762s
sys	0m4.797s
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_target_discovery
************************************
16:22:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
16:22:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
16:22:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
16:22:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_referrals
************************************
16:22:23 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.549 --rc genhtml_branch_coverage=1 00:09:29.549 --rc genhtml_function_coverage=1 00:09:29.549 --rc genhtml_legend=1 00:09:29.549 --rc geninfo_all_blocks=1 00:09:29.549 --rc geninfo_unexecuted_blocks=1 00:09:29.549 00:09:29.549 ' 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.549 --rc genhtml_branch_coverage=1 00:09:29.549 --rc genhtml_function_coverage=1 00:09:29.549 --rc genhtml_legend=1 00:09:29.549 --rc geninfo_all_blocks=1 00:09:29.549 --rc geninfo_unexecuted_blocks=1 00:09:29.549 00:09:29.549 ' 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.549 --rc genhtml_branch_coverage=1 00:09:29.549 --rc genhtml_function_coverage=1 00:09:29.549 --rc genhtml_legend=1 00:09:29.549 --rc geninfo_all_blocks=1 00:09:29.549 --rc geninfo_unexecuted_blocks=1 00:09:29.549 00:09:29.549 ' 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:29.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.549 --rc genhtml_branch_coverage=1 00:09:29.549 --rc genhtml_function_coverage=1 00:09:29.549 --rc genhtml_legend=1 00:09:29.549 --rc geninfo_all_blocks=1 00:09:29.549 --rc geninfo_unexecuted_blocks=1 00:09:29.549 00:09:29.549 ' 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.549 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.550 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:29.550 16:22:24 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:36.115 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:36.115 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:36.115 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:36.116 Found net devices under 0000:18:00.0: mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:36.116 Found net devices under 0000:18:00.1: mlx_0_1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:36.116 16:22:29 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0np0
    altname ens785f0np0
    inet 192.168.100.8/24 scope global mlx_0_0
       valid_lft forever preferred_lft forever
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}'
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1np1
    altname ens785f1np1
    inet 192.168.100.9/24 scope global mlx_0_1
       valid_lft forever preferred_lft forever
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list
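The get_ip_address helper traced here is just a three-stage pipeline over `ip -o -4`; a minimal standalone equivalent (function name as in nvmf/common.sh, interface names from this host):

  get_ip_address() {
    local interface=$1
    # -o prints one line per address, e.g. "2: mlx_0_0    inet 192.168.100.8/24 ... scope global mlx_0_0"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this machine
  get_ip_address mlx_0_1   # 192.168.100.9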
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:36.116 192.168.100.9' 
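RDMA_IP_LIST now holds one address per line, and the next trace lines peel off the first and second target IPs with head/tail; equivalently:

  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9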
00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:36.116 192.168.100.9' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:36.116 192.168.100.9' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3703856 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3703856 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3703856 ']' 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.116 16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.116 [2024-12-06 16:22:29.808520] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
[2024-12-06 16:22:29.808564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-06 16:22:29.866147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-12-06 16:22:29.905157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-06 16:22:29.905191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-06 16:22:29.905197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-06 16:22:29.905206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-06 16:22:29.905211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-12-06 16:22:29.906404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-12-06 16:22:29.906425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-06 16:22:29.906515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-06 16:22:29.906517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
16:22:29 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
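What nvmfappstart did behind these notices is launch the target binary with this run's flags and wait for its RPC socket before any rpc_cmd is issued. A rough standalone sketch, assuming the SPDK tree as working directory and the default /var/tmp/spdk.sock socket:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -i: shm id, -e: tracepoint group mask, -m: core mask (cores 0-3)
  nvmfpid=$!
  # waitforlisten, roughly: poll until the RPC socket answers
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The trap and the nvmf_create_transport RPC in the trace below then make cleanup automatic and bring up the RDMA transport on the two mlx5 ports.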
00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 [2024-12-06 16:22:30.196060] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.117 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:36.375 16:22:30 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.375 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.632 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
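
Stripped of the xtrace plumbing, the referral round-trip these entries verify is small. A sketch of the same flow, assuming rpc.py from the SPDK tree and the HOSTNQN/HOSTID pair generated by nvme gen-hostnqn earlier in the log:

    rpc=$SPDK/scripts/rpc.py
    # add three referrals, then assert the target reports all of them
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    [[ $($rpc nvmf_discovery_get_referrals | jq length) -eq 3 ]]
    # the same addresses must also appear on the wire in the discovery log page
    nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # remove them again and check the list drains to zero
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t rdma -a "$ip" -s 4430
    done
    (( $($rpc nvmf_discovery_get_referrals | jq length) == 0 ))

A referral may also name a subsystem via -n: the test adds both an "-n discovery" and an "-n nqn.2016-06.io.spdk:cnode1" variant at the same traddr and checks the subnqn advertised in each discovery record.
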
00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:36.890 rmmod nvme_rdma 00:09:36.890 rmmod nvme_fabrics 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3703856 ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3703856 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3703856 ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3703856 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3703856 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3703856' 00:09:36.890 killing process with pid 3703856 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3703856 00:09:36.890 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3703856 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:37.148 00:09:37.148 real 0m7.867s 00:09:37.148 user 0m9.924s 00:09:37.148 sys 0m4.994s 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.148 16:22:31 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:37.148 ************************************ 00:09:37.148 END TEST nvmf_referrals 00:09:37.148 ************************************ 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.148 16:22:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:37.407 ************************************ 00:09:37.407 START TEST nvmf_connect_disconnect 00:09:37.407 ************************************ 00:09:37.407 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:37.407 * Looking for test storage... 00:09:37.407 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:37.407 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.407 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.407 16:22:31 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.407 --rc genhtml_branch_coverage=1 00:09:37.407 --rc genhtml_function_coverage=1 00:09:37.407 --rc genhtml_legend=1 00:09:37.407 --rc geninfo_all_blocks=1 00:09:37.407 --rc geninfo_unexecuted_blocks=1 00:09:37.407 00:09:37.407 ' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.407 --rc genhtml_branch_coverage=1 00:09:37.407 --rc genhtml_function_coverage=1 00:09:37.407 --rc genhtml_legend=1 00:09:37.407 --rc geninfo_all_blocks=1 00:09:37.407 --rc geninfo_unexecuted_blocks=1 00:09:37.407 00:09:37.407 ' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.407 --rc genhtml_branch_coverage=1 00:09:37.407 --rc genhtml_function_coverage=1 00:09:37.407 --rc genhtml_legend=1 00:09:37.407 --rc geninfo_all_blocks=1 00:09:37.407 --rc geninfo_unexecuted_blocks=1 00:09:37.407 00:09:37.407 ' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.407 --rc genhtml_branch_coverage=1 00:09:37.407 --rc genhtml_function_coverage=1 00:09:37.407 --rc genhtml_legend=1 00:09:37.407 --rc geninfo_all_blocks=1 00:09:37.407 --rc geninfo_unexecuted_blocks=1 00:09:37.407 00:09:37.407 ' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.407 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.408 16:22:32 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.408 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.408 16:22:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:09:43.968 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:43.968 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:43.968 Found net devices under 0000:18:00.0: mlx_0_0 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
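
What this probing does: the harness walks its table of Intel/Mellanox PCI IDs, keeps the two mlx5 ports it finds here (0x15b3 device 0x1015), notes that RDMA needs 'nvme connect -i 15', and resolves each PCI function to its netdev through sysfs. Roughly, as a sketch using the PCI addresses from this run:

    # map each Mellanox function to its kernel net interface
    for pci in 0000:18:00.0 0000:18:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"   # prints mlx_0_0 / mlx_0_1 here
    done
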
00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:43.968 Found net devices under 0000:18:00.1: mlx_0_1 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:43.968 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:43.969 16:22:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:43.969 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:43.969 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:43.969 altname enp24s0f0np0 00:09:43.969 altname ens785f0np0 00:09:43.969 inet 192.168.100.8/24 scope global mlx_0_0 00:09:43.969 valid_lft forever preferred_lft forever 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:43.969 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:43.969 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:43.969 altname enp24s0f1np1 00:09:43.969 altname ens785f1np1 00:09:43.969 inet 192.168.100.9/24 scope global mlx_0_1 00:09:43.969 valid_lft forever preferred_lft forever 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:43.969 16:22:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:43.969 192.168.100.9' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:43.969 192.168.100.9' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:43.969 192.168.100.9' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:43.969 16:22:37 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3707501 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3707501 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3707501 ']' 00:09:43.969 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.970 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.970 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.970 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.970 16:22:37 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 [2024-12-06 16:22:37.876882] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:09:43.970 [2024-12-06 16:22:37.876927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.970 [2024-12-06 16:22:37.935734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.970 [2024-12-06 16:22:37.975450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.970 [2024-12-06 16:22:37.975487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.970 [2024-12-06 16:22:37.975494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.970 [2024-12-06 16:22:37.975499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.970 [2024-12-06 16:22:37.975505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:43.970 [2024-12-06 16:22:37.977004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.970 [2024-12-06 16:22:37.977101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.970 [2024-12-06 16:22:37.977208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.970 [2024-12-06 16:22:37.977210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 [2024-12-06 16:22:38.121854] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:43.970 [2024-12-06 16:22:38.140529] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdd60c0/0xdda5b0) succeed. 00:09:43.970 [2024-12-06 16:22:38.148704] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdd7750/0xe1bc50) succeed. 
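Note: the nvmfappstart/transport bring-up traced here is equivalent to the following sketch. The binary path and RPC flags are exactly as logged; the socket poll is an assumed stand-in for waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# crude waitforlisten substitute: poll for the RPC domain socket
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0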
00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.970 [2024-12-06 16:22:38.295313] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:43.970 16:22:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:48.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:04.044 16:22:58 
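Note: the five "disconnected 1 controller(s)" lines above come from a loop equivalent to this sketch. The RPCs mirror connect_disconnect.sh@20-24 as traced; the nvme-cli connect/disconnect pair on the initiator side is assumed, since the trace only shows its output:

rpc() { "$SPDK/scripts/rpc.py" "$@"; }
rpc bdev_malloc_create 64 512                        # bdev name: Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
for i in $(seq 1 5); do
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the NQN:... line above
done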
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:04.044 rmmod nvme_rdma 00:10:04.044 rmmod nvme_fabrics 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3707501 ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3707501 ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3707501' 00:10:04.044 killing process with pid 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3707501 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:04.044 00:10:04.044 real 0m26.515s 00:10:04.044 user 1m22.865s 00:10:04.044 sys 0m5.263s 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:04.044 
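Note: the teardown traced above (nvmftestfini) boils down to this sketch. The module names and the 20-try retry loop are as logged; the kill/wait pair stands in for killprocess:

sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break    # "rmmod nvme_rdma" lines above
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"
wait "$nvmfpid"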
************************************ 00:10:04.044 END TEST nvmf_connect_disconnect 00:10:04.044 ************************************ 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.044 ************************************ 00:10:04.044 START TEST nvmf_multitarget 00:10:04.044 ************************************ 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:04.044 * Looking for test storage... 00:10:04.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.044 --rc genhtml_branch_coverage=1 00:10:04.044 --rc genhtml_function_coverage=1 00:10:04.044 --rc genhtml_legend=1 00:10:04.044 --rc geninfo_all_blocks=1 00:10:04.044 --rc geninfo_unexecuted_blocks=1 00:10:04.044 00:10:04.044 ' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.044 --rc genhtml_branch_coverage=1 00:10:04.044 --rc genhtml_function_coverage=1 00:10:04.044 --rc genhtml_legend=1 00:10:04.044 --rc geninfo_all_blocks=1 00:10:04.044 --rc geninfo_unexecuted_blocks=1 00:10:04.044 00:10:04.044 ' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.044 --rc genhtml_branch_coverage=1 00:10:04.044 --rc genhtml_function_coverage=1 00:10:04.044 --rc genhtml_legend=1 00:10:04.044 --rc geninfo_all_blocks=1 00:10:04.044 --rc geninfo_unexecuted_blocks=1 00:10:04.044 00:10:04.044 ' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.044 --rc genhtml_branch_coverage=1 00:10:04.044 --rc genhtml_function_coverage=1 00:10:04.044 --rc genhtml_legend=1 00:10:04.044 --rc geninfo_all_blocks=1 00:10:04.044 --rc geninfo_unexecuted_blocks=1 00:10:04.044 00:10:04.044 ' 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.044 16:22:58 
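Note: the cmp_versions walk traced above only decides whether the installed lcov predates 2.0, so the right --rc flags get exported. An equivalent shorthand using sort -V, not the field-by-field routine the trace shows:

lt() {
    # true when $1 is strictly older than $2, e.g. lt 1.15 2
    [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}
lcov_ver=$(lcov --version | awk '{print $NF}')
lt "$lcov_ver" 2 && echo "lcov < 2: use the legacy lcov_branch_coverage options"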
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.044 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.045 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:04.045 16:22:58 
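Note: the "line 33: [: : integer expression expected" message above is a benign but real quirk in common.sh: test is handed an empty string where -eq expects a number ('[' '' -eq 1 ']'). A hedged one-line hardening; the variable name is a placeholder, since the trace elides which expansion came up empty:

# guard the expansion with a default so [ never sees an empty operand
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    :   # branch body elided
fi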
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.045 16:22:58 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:10.604 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:10.604 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:10.604 Found net devices under 0000:18:00.0: mlx_0_0 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.604 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:10.605 Found net devices under 0000:18:00.1: mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:10.605 16:23:04 
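Note: the "Found 0000:18:00.0 (0x15b3 - 0x1015)" / "Found net devices under ..." pairs above map Mellanox PCI functions to their netdevs. A rough equivalent of that discovery; the lspci filter is an assumed shorthand, while the sysfs glob matches common.sh@411 as traced:

for pci in $(lspci -Dn -d 15b3: | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue            # skip functions with no netdev bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
done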
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.605 16:23:04 
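Note: the modprobe run traced above (load_ib_rdma_modules, nvmf/common.sh@62-72) is, modulo error handling, the following; Linux-only, hence the uname guard visible in the trace:

load_ib_rdma_modules() {
    [ "$(uname)" = Linux ] || return 0
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}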
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:10.605 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:10.605 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:10.605 altname enp24s0f0np0 00:10:10.605 altname ens785f0np0 00:10:10.605 inet 192.168.100.8/24 scope global mlx_0_0 00:10:10.605 valid_lft forever preferred_lft forever 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:10.605 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:10.605 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:10.605 altname enp24s0f1np1 00:10:10.605 altname ens785f1np1 00:10:10.605 inet 192.168.100.9/24 scope global mlx_0_1 00:10:10.605 valid_lft forever preferred_lft forever 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:10.605 192.168.100.9' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:10.605 192.168.100.9' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # 
head -n 1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:10.605 192.168.100.9' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3714583 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3714583 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3714583 ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:10.605 [2024-12-06 16:23:04.430585] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
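Note: RDMA_IP_LIST is split into the two target addresses exactly as the head/tail pipeline above shows; a self-contained sketch of common.sh@485-486:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9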
00:10:10.605 [2024-12-06 16:23:04.430641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.605 [2024-12-06 16:23:04.490759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.605 [2024-12-06 16:23:04.532277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.605 [2024-12-06 16:23:04.532312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.605 [2024-12-06 16:23:04.532318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.605 [2024-12-06 16:23:04.532323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.605 [2024-12-06 16:23:04.532328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.605 [2024-12-06 16:23:04.533732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.605 [2024-12-06 16:23:04.533749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.605 [2024-12-06 16:23:04.533844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.605 [2024-12-06 16:23:04.533846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:10.605 "nvmf_tgt_1" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:10.605 "nvmf_tgt_2" 00:10:10.605 16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:10.605 
16:23:04 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:10.605 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:10.605 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:10.605 true 00:10:10.605 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:10.605 true 00:10:10.605 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:10.605 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:10.864 rmmod nvme_rdma 00:10:10.864 rmmod nvme_fabrics 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3714583 ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3714583 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3714583 ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3714583 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714583 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
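Note: the multitarget checks traced above amount to this sequence: assert one default target, add two named ones, assert three, delete both, assert one again. Paths and flags as logged:

rpc_py=$SPDK/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ] || exit 1
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ] || exit 1
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ] || exit 1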
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714583' 00:10:10.864 killing process with pid 3714583 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3714583 00:10:10.864 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3714583 00:10:11.122 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.122 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:11.122 00:10:11.122 real 0m7.135s 00:10:11.122 user 0m6.732s 00:10:11.122 sys 0m4.708s 00:10:11.122 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:11.123 ************************************ 00:10:11.123 END TEST nvmf_multitarget 00:10:11.123 ************************************ 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.123 ************************************ 00:10:11.123 START TEST nvmf_rpc 00:10:11.123 ************************************ 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:11.123 * Looking for test storage... 
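Before the next wall of xtrace: the START/END TEST banners and the real/user/sys timing printed above come from the suite's run_test wrapper. The real helper lives in autotest_common.sh; this reconstruction is only an assumption fitted to the output visible here:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"    # here: .../test/nvmf/target/rpc.sh --transport=rdma
        echo "************ END TEST $name ************"
    }
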
00:10:11.123 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.123 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.382 --rc genhtml_branch_coverage=1 00:10:11.382 --rc genhtml_function_coverage=1 00:10:11.382 --rc genhtml_legend=1 00:10:11.382 --rc geninfo_all_blocks=1 00:10:11.382 --rc geninfo_unexecuted_blocks=1 00:10:11.382 00:10:11.382 ' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.382 --rc genhtml_branch_coverage=1 00:10:11.382 --rc genhtml_function_coverage=1 00:10:11.382 --rc genhtml_legend=1 00:10:11.382 --rc geninfo_all_blocks=1 00:10:11.382 --rc geninfo_unexecuted_blocks=1 00:10:11.382 00:10:11.382 ' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.382 --rc genhtml_branch_coverage=1 00:10:11.382 --rc genhtml_function_coverage=1 00:10:11.382 --rc genhtml_legend=1 00:10:11.382 --rc geninfo_all_blocks=1 00:10:11.382 --rc geninfo_unexecuted_blocks=1 00:10:11.382 00:10:11.382 ' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.382 --rc genhtml_branch_coverage=1 00:10:11.382 --rc genhtml_function_coverage=1 00:10:11.382 --rc genhtml_legend=1 00:10:11.382 --rc geninfo_all_blocks=1 00:10:11.382 --rc geninfo_unexecuted_blocks=1 00:10:11.382 00:10:11.382 ' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.382 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:11.382 16:23:05 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.382 16:23:05 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.640 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.641 16:23:11 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:16.641 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:16.641 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:16.641 Found net devices under 0000:18:00.0: mlx_0_0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:16.641 Found net devices under 0000:18:00.1: mlx_0_1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:16.641 16:23:11 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:16.641 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.641 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:16.641 altname enp24s0f0np0 00:10:16.641 altname ens785f0np0 00:10:16.641 inet 192.168.100.8/24 scope global mlx_0_0 00:10:16.641 valid_lft forever preferred_lft forever 00:10:16.641 
16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:16.641 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.641 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:16.641 altname enp24s0f1np1 00:10:16.641 altname ens785f1np1 00:10:16.641 inet 192.168.100.9/24 scope global mlx_0_1 00:10:16.641 valid_lft forever preferred_lft forever 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:16.641 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
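The inet lines above are where the harness learns its RDMA addresses. The per-interface lookup the trace keeps repeating condenses to one pipeline (commands copied from the trace):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1    # -> 192.168.100.9 in this run
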
00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:16.642 192.168.100.9' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:16.642 192.168.100.9' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:16.642 192.168.100.9' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.642 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
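That completes nvmftestinit: the newline-separated address list is split into first and second target IPs, the transport options gain a shared-buffer count, and the nvme-rdma initiator module is loaded. The equivalent commands, with this run's values:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma
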
00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3718102 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3718102 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3718102 ']' 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.900 [2024-12-06 16:23:11.414393] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:10:16.900 [2024-12-06 16:23:11.414443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.900 [2024-12-06 16:23:11.474338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.900 [2024-12-06 16:23:11.514083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.900 [2024-12-06 16:23:11.514122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.900 [2024-12-06 16:23:11.514129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.900 [2024-12-06 16:23:11.514134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.900 [2024-12-06 16:23:11.514138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
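At this point the target application is up: shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF), four cores on mask 0xF. Stripped of harness plumbing, the launch amounts to the following; the polling loop is our stand-in for waitforlisten, which really watches /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the app answers on /var/tmp/spdk.sock
    done
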
00:10:16.900 [2024-12-06 16:23:11.515565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.900 [2024-12-06 16:23:11.515663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.900 [2024-12-06 16:23:11.515727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.900 [2024-12-06 16:23:11.515728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.900 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.159 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:17.159 "tick_rate": 2700000000, 00:10:17.159 "poll_groups": [ 00:10:17.159 { 00:10:17.159 "name": "nvmf_tgt_poll_group_000", 00:10:17.159 "admin_qpairs": 0, 00:10:17.159 "io_qpairs": 0, 00:10:17.159 "current_admin_qpairs": 0, 00:10:17.159 "current_io_qpairs": 0, 00:10:17.159 "pending_bdev_io": 0, 00:10:17.159 "completed_nvme_io": 0, 00:10:17.159 "transports": [] 00:10:17.159 }, 00:10:17.160 { 00:10:17.160 "name": "nvmf_tgt_poll_group_001", 00:10:17.160 "admin_qpairs": 0, 00:10:17.160 "io_qpairs": 0, 00:10:17.160 "current_admin_qpairs": 0, 00:10:17.160 "current_io_qpairs": 0, 00:10:17.160 "pending_bdev_io": 0, 00:10:17.160 "completed_nvme_io": 0, 00:10:17.160 "transports": [] 00:10:17.160 }, 00:10:17.160 { 00:10:17.160 "name": "nvmf_tgt_poll_group_002", 00:10:17.160 "admin_qpairs": 0, 00:10:17.160 "io_qpairs": 0, 00:10:17.160 "current_admin_qpairs": 0, 00:10:17.160 "current_io_qpairs": 0, 00:10:17.160 "pending_bdev_io": 0, 00:10:17.160 "completed_nvme_io": 0, 00:10:17.160 "transports": [] 00:10:17.160 }, 00:10:17.160 { 00:10:17.160 "name": "nvmf_tgt_poll_group_003", 00:10:17.160 "admin_qpairs": 0, 00:10:17.160 "io_qpairs": 0, 00:10:17.160 "current_admin_qpairs": 0, 00:10:17.160 "current_io_qpairs": 0, 00:10:17.160 "pending_bdev_io": 0, 00:10:17.160 "completed_nvme_io": 0, 00:10:17.160 "transports": [] 00:10:17.160 } 00:10:17.160 ] 00:10:17.160 }' 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.160 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.160 [2024-12-06 16:23:11.776689] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x71b120/0x71f610) succeed. 00:10:17.160 [2024-12-06 16:23:11.785509] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x71c7b0/0x760cb0) succeed. 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.418 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:17.418 "tick_rate": 2700000000, 00:10:17.418 "poll_groups": [ 00:10:17.418 { 00:10:17.418 "name": "nvmf_tgt_poll_group_000", 00:10:17.418 "admin_qpairs": 0, 00:10:17.418 "io_qpairs": 0, 00:10:17.418 "current_admin_qpairs": 0, 00:10:17.418 "current_io_qpairs": 0, 00:10:17.418 "pending_bdev_io": 0, 00:10:17.418 "completed_nvme_io": 0, 00:10:17.418 "transports": [ 00:10:17.418 { 00:10:17.418 "trtype": "RDMA", 00:10:17.418 "pending_data_buffer": 0, 00:10:17.418 "devices": [ 00:10:17.418 { 00:10:17.418 "name": "mlx5_0", 00:10:17.418 "polls": 15371, 00:10:17.418 "idle_polls": 15371, 00:10:17.418 "completions": 0, 00:10:17.418 "requests": 0, 00:10:17.418 "request_latency": 0, 00:10:17.418 "pending_free_request": 0, 00:10:17.418 "pending_rdma_read": 0, 00:10:17.418 "pending_rdma_write": 0, 00:10:17.418 "pending_rdma_send": 0, 00:10:17.418 "total_send_wrs": 0, 00:10:17.418 "send_doorbell_updates": 0, 00:10:17.418 "total_recv_wrs": 4096, 00:10:17.418 "recv_doorbell_updates": 1 00:10:17.418 }, 00:10:17.418 { 00:10:17.418 "name": "mlx5_1", 00:10:17.418 "polls": 15371, 00:10:17.418 "idle_polls": 15371, 00:10:17.418 "completions": 0, 00:10:17.418 "requests": 0, 00:10:17.418 "request_latency": 0, 00:10:17.418 "pending_free_request": 0, 00:10:17.418 "pending_rdma_read": 0, 00:10:17.418 "pending_rdma_write": 0, 00:10:17.418 "pending_rdma_send": 0, 00:10:17.418 "total_send_wrs": 0, 00:10:17.418 "send_doorbell_updates": 0, 00:10:17.418 "total_recv_wrs": 4096, 00:10:17.418 "recv_doorbell_updates": 1 00:10:17.418 } 00:10:17.418 ] 00:10:17.418 } 00:10:17.418 ] 00:10:17.418 }, 00:10:17.418 { 00:10:17.418 "name": "nvmf_tgt_poll_group_001", 00:10:17.418 "admin_qpairs": 0, 00:10:17.418 "io_qpairs": 0, 00:10:17.418 "current_admin_qpairs": 0, 00:10:17.418 "current_io_qpairs": 0, 00:10:17.418 "pending_bdev_io": 0, 00:10:17.418 "completed_nvme_io": 0, 00:10:17.418 "transports": [ 00:10:17.418 { 00:10:17.418 "trtype": "RDMA", 00:10:17.418 "pending_data_buffer": 0, 00:10:17.418 "devices": [ 00:10:17.418 { 00:10:17.418 "name": "mlx5_0", 
00:10:17.418 "polls": 9496, 00:10:17.418 "idle_polls": 9496, 00:10:17.418 "completions": 0, 00:10:17.418 "requests": 0, 00:10:17.418 "request_latency": 0, 00:10:17.418 "pending_free_request": 0, 00:10:17.418 "pending_rdma_read": 0, 00:10:17.418 "pending_rdma_write": 0, 00:10:17.418 "pending_rdma_send": 0, 00:10:17.418 "total_send_wrs": 0, 00:10:17.418 "send_doorbell_updates": 0, 00:10:17.418 "total_recv_wrs": 4096, 00:10:17.418 "recv_doorbell_updates": 1 00:10:17.418 }, 00:10:17.418 { 00:10:17.418 "name": "mlx5_1", 00:10:17.418 "polls": 9496, 00:10:17.418 "idle_polls": 9496, 00:10:17.418 "completions": 0, 00:10:17.418 "requests": 0, 00:10:17.418 "request_latency": 0, 00:10:17.418 "pending_free_request": 0, 00:10:17.418 "pending_rdma_read": 0, 00:10:17.418 "pending_rdma_write": 0, 00:10:17.418 "pending_rdma_send": 0, 00:10:17.418 "total_send_wrs": 0, 00:10:17.418 "send_doorbell_updates": 0, 00:10:17.418 "total_recv_wrs": 4096, 00:10:17.418 "recv_doorbell_updates": 1 00:10:17.418 } 00:10:17.418 ] 00:10:17.418 } 00:10:17.418 ] 00:10:17.418 }, 00:10:17.418 { 00:10:17.418 "name": "nvmf_tgt_poll_group_002", 00:10:17.418 "admin_qpairs": 0, 00:10:17.418 "io_qpairs": 0, 00:10:17.418 "current_admin_qpairs": 0, 00:10:17.418 "current_io_qpairs": 0, 00:10:17.418 "pending_bdev_io": 0, 00:10:17.418 "completed_nvme_io": 0, 00:10:17.418 "transports": [ 00:10:17.418 { 00:10:17.418 "trtype": "RDMA", 00:10:17.418 "pending_data_buffer": 0, 00:10:17.418 "devices": [ 00:10:17.418 { 00:10:17.418 "name": "mlx5_0", 00:10:17.418 "polls": 5346, 00:10:17.418 "idle_polls": 5346, 00:10:17.418 "completions": 0, 00:10:17.418 "requests": 0, 00:10:17.418 "request_latency": 0, 00:10:17.418 "pending_free_request": 0, 00:10:17.418 "pending_rdma_read": 0, 00:10:17.418 "pending_rdma_write": 0, 00:10:17.418 "pending_rdma_send": 0, 00:10:17.418 "total_send_wrs": 0, 00:10:17.418 "send_doorbell_updates": 0, 00:10:17.419 "total_recv_wrs": 4096, 00:10:17.419 "recv_doorbell_updates": 1 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": "mlx5_1", 00:10:17.419 "polls": 5346, 00:10:17.419 "idle_polls": 5346, 00:10:17.419 "completions": 0, 00:10:17.419 "requests": 0, 00:10:17.419 "request_latency": 0, 00:10:17.419 "pending_free_request": 0, 00:10:17.419 "pending_rdma_read": 0, 00:10:17.419 "pending_rdma_write": 0, 00:10:17.419 "pending_rdma_send": 0, 00:10:17.419 "total_send_wrs": 0, 00:10:17.419 "send_doorbell_updates": 0, 00:10:17.419 "total_recv_wrs": 4096, 00:10:17.419 "recv_doorbell_updates": 1 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": "nvmf_tgt_poll_group_003", 00:10:17.419 "admin_qpairs": 0, 00:10:17.419 "io_qpairs": 0, 00:10:17.419 "current_admin_qpairs": 0, 00:10:17.419 "current_io_qpairs": 0, 00:10:17.419 "pending_bdev_io": 0, 00:10:17.419 "completed_nvme_io": 0, 00:10:17.419 "transports": [ 00:10:17.419 { 00:10:17.419 "trtype": "RDMA", 00:10:17.419 "pending_data_buffer": 0, 00:10:17.419 "devices": [ 00:10:17.419 { 00:10:17.419 "name": "mlx5_0", 00:10:17.419 "polls": 930, 00:10:17.419 "idle_polls": 930, 00:10:17.419 "completions": 0, 00:10:17.419 "requests": 0, 00:10:17.419 "request_latency": 0, 00:10:17.419 "pending_free_request": 0, 00:10:17.419 "pending_rdma_read": 0, 00:10:17.419 "pending_rdma_write": 0, 00:10:17.419 "pending_rdma_send": 0, 00:10:17.419 "total_send_wrs": 0, 00:10:17.419 "send_doorbell_updates": 0, 00:10:17.419 "total_recv_wrs": 4096, 00:10:17.419 "recv_doorbell_updates": 1 00:10:17.419 }, 00:10:17.419 { 00:10:17.419 "name": "mlx5_1", 
00:10:17.419 "polls": 930, 00:10:17.419 "idle_polls": 930, 00:10:17.419 "completions": 0, 00:10:17.419 "requests": 0, 00:10:17.419 "request_latency": 0, 00:10:17.419 "pending_free_request": 0, 00:10:17.419 "pending_rdma_read": 0, 00:10:17.419 "pending_rdma_write": 0, 00:10:17.419 "pending_rdma_send": 0, 00:10:17.419 "total_send_wrs": 0, 00:10:17.419 "send_doorbell_updates": 0, 00:10:17.419 "total_recv_wrs": 4096, 00:10:17.419 "recv_doorbell_updates": 1 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 } 00:10:17.419 ] 00:10:17.419 }' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:17.419 16:23:11 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:17.419 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:17.677 16:23:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 Malloc1 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 [2024-12-06 16:23:12.214547] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:17.677 16:23:12 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:17.677 [2024-12-06 16:23:12.264542] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:10:17.677 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:17.677 could not add new controller: failed to write to nvme-fabrics device 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.677 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.678 16:23:12 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:18.611 16:23:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.611 16:23:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:18.611 16:23:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.611 16:23:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:18.611 16:23:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:21.137 16:23:15 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:21.137 16:23:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
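waitforserial, whose xtrace dominates the block above, is the host-side synchronization point: after nvme connect it polls lsblk until a block device carrying the subsystem serial appears (grep -c counts matches against an expected device count, 1 here), retrying up to 15 times with a 2-second sleep. Its counterpart waitforserial_disconnect, used right after nvme disconnect, polls the same listing with grep -q -w until the serial disappears. A minimal sketch of the first helper under those assumptions:

  waitforserial() {
      local serial=$1 expected=${2:-1} i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
      done
      return 1   # device never showed up
  }
  waitforserial SPDKISFASTANDAWESOME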
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:21.702 [2024-12-06 16:23:16.325737] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:10:21.702 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:21.702 could not add new controller: failed to write to nvme-fabrics device 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.702 16:23:16 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:22.635 16:23:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.635 16:23:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.635 16:23:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.635 16:23:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.635 16:23:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:24.642 16:23:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
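Taken together, rpc.sh@58 through @73 walk the subsystem through every host-access state: deny-by-default with an empty allow list (connect rejected), nvmf_subsystem_add_host admits one host NQN (connect succeeds), nvmf_subsystem_remove_host revokes it (rejected again), and nvmf_subsystem_allow_any_host -e reopens the subsystem, so the final connect above succeeds with no allow-list entry at all. The toggle pair as plain rpc.py calls (sketch):

  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the allow list
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1   # admit any host NQN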
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 [2024-12-06 16:23:20.361239] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.015 16:23:20 
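target/rpc.sh@81 opens the first of two stress loops: seq 1 5, so five full provision/connect/teardown cycles. Each iteration recreates the subsystem, the RDMA listener, and the namespace (pinned to NSID 5 via -n 5), connects and disconnects a host, then removes the namespace and deletes the subsystem. One iteration, sketched with direct rpc.py calls (paths and the waitforserial helper from above assumed):

  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc1 -n 5         # pin namespace ID 5
      scripts/rpc.py nvmf_subsystem_allow_any_host $nqn              # open access, as at rpc.sh@85
      nvme connect -t rdma -n $nqn -a 192.168.100.8 -s 4420
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n $nqn
      scripts/rpc.py nvmf_subsystem_remove_ns $nqn 5
      scripts/rpc.py nvmf_delete_subsystem $nqn
  done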
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.015 16:23:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:26.950 16:23:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.950 16:23:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:26.950 16:23:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.950 16:23:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:26.950 16:23:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:28.849 16:23:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 [2024-12-06 16:23:24.379438] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.784 16:23:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:30.717 16:23:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.717 16:23:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:30.717 16:23:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.717 16:23:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:30.717 16:23:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:33.274 
16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:33.274 16:23:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 [2024-12-06 16:23:28.428077] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.838 16:23:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:34.770 16:23:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.770 16:23:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.770 16:23:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.770 16:23:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:34.770 16:23:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:37.304 16:23:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:37.872 16:23:32 
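A detail worth noting in these iterations: nvmf_subsystem_add_ns ... -n 5 pins the namespace to NSID 5, which is why the matching nvmf_subsystem_remove_ns call passes 5 rather than 1. The second loop further below (rpc.sh@99 onward) omits -n and removes NSID 1 instead, consistent with the target assigning the first free ID when none is requested. If reproducing this by hand, one way to check which NSID a namespace landed on (a sketch; the jq filter is an assumption about the JSON shape of nvmf_get_subsystems output):

  scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'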
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.872 [2024-12-06 16:23:32.441992] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:37.872 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.873 16:23:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:38.805 16:23:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:38.805 16:23:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:38.805 16:23:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.805 16:23:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:38.805 16:23:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:40.703 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:40.961 16:23:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:41.892 16:23:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 [2024-12-06 16:23:36.444872] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.892 16:23:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:42.823 16:23:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.823 16:23:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:42.823 16:23:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.823 16:23:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:42.823 16:23:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:44.718 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:44.718 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:44.718 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.975 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:44.975 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:10:44.975 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:44.975 16:23:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 [2024-12-06 16:23:40.445722] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.906 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 [2024-12-06 16:23:40.493902] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 [2024-12-06 16:23:40.542069] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 [2024-12-06 16:23:40.590219] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.907 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 [2024-12-06 16:23:40.638407] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 
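The second loop (rpc.sh@99 through @107, also five iterations, wrapping up here) never connects a host: it is pure RPC churn that creates the subsystem, listener, and namespace and immediately tears them down again, exercising the target's create/delete paths back to back. Sketched:

  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc1     # no -n: first free NSID, 1 here
      scripts/rpc.py nvmf_subsystem_allow_any_host $nqn
      scripts/rpc.py nvmf_subsystem_remove_ns $nqn 1
      scripts/rpc.py nvmf_delete_subsystem $nqn
  done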
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.165 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:46.165 "tick_rate": 2700000000, 00:10:46.165 "poll_groups": [ 00:10:46.165 { 00:10:46.165 "name": "nvmf_tgt_poll_group_000", 00:10:46.165 "admin_qpairs": 2, 00:10:46.165 "io_qpairs": 27, 00:10:46.165 "current_admin_qpairs": 0, 00:10:46.165 "current_io_qpairs": 0, 00:10:46.165 "pending_bdev_io": 0, 00:10:46.165 "completed_nvme_io": 28, 00:10:46.165 "transports": [ 00:10:46.165 { 00:10:46.165 "trtype": "RDMA", 00:10:46.165 "pending_data_buffer": 0, 00:10:46.165 "devices": [ 00:10:46.165 { 00:10:46.165 "name": "mlx5_0", 00:10:46.165 "polls": 3743096, 00:10:46.165 "idle_polls": 3742927, 00:10:46.165 "completions": 169, 00:10:46.165 "requests": 84, 00:10:46.165 "request_latency": 9070298, 00:10:46.165 "pending_free_request": 0, 00:10:46.165 "pending_rdma_read": 0, 00:10:46.165 "pending_rdma_write": 0, 00:10:46.165 "pending_rdma_send": 0, 00:10:46.165 "total_send_wrs": 111, 00:10:46.165 "send_doorbell_updates": 85, 00:10:46.166 "total_recv_wrs": 4180, 00:10:46.166 "recv_doorbell_updates": 85 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "mlx5_1", 00:10:46.166 "polls": 3743096, 00:10:46.166 "idle_polls": 3743096, 00:10:46.166 "completions": 0, 00:10:46.166 "requests": 0, 00:10:46.166 "request_latency": 0, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 0, 00:10:46.166 "send_doorbell_updates": 0, 00:10:46.166 "total_recv_wrs": 4096, 00:10:46.166 "recv_doorbell_updates": 1 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "nvmf_tgt_poll_group_001", 00:10:46.166 "admin_qpairs": 2, 00:10:46.166 "io_qpairs": 26, 00:10:46.166 "current_admin_qpairs": 0, 00:10:46.166 "current_io_qpairs": 0, 00:10:46.166 "pending_bdev_io": 0, 00:10:46.166 "completed_nvme_io": 224, 00:10:46.166 "transports": [ 00:10:46.166 { 00:10:46.166 "trtype": "RDMA", 00:10:46.166 "pending_data_buffer": 0, 00:10:46.166 "devices": [ 00:10:46.166 { 00:10:46.166 "name": "mlx5_0", 00:10:46.166 "polls": 3565974, 00:10:46.166 "idle_polls": 3565498, 00:10:46.166 "completions": 560, 00:10:46.166 "requests": 280, 00:10:46.166 "request_latency": 69866672, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 504, 00:10:46.166 "send_doorbell_updates": 229, 00:10:46.166 "total_recv_wrs": 4376, 00:10:46.166 "recv_doorbell_updates": 230 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "mlx5_1", 00:10:46.166 "polls": 3565974, 00:10:46.166 "idle_polls": 3565974, 00:10:46.166 "completions": 0, 00:10:46.166 "requests": 0, 00:10:46.166 "request_latency": 0, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 
"pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 0, 00:10:46.166 "send_doorbell_updates": 0, 00:10:46.166 "total_recv_wrs": 4096, 00:10:46.166 "recv_doorbell_updates": 1 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "nvmf_tgt_poll_group_002", 00:10:46.166 "admin_qpairs": 1, 00:10:46.166 "io_qpairs": 26, 00:10:46.166 "current_admin_qpairs": 0, 00:10:46.166 "current_io_qpairs": 0, 00:10:46.166 "pending_bdev_io": 0, 00:10:46.166 "completed_nvme_io": 126, 00:10:46.166 "transports": [ 00:10:46.166 { 00:10:46.166 "trtype": "RDMA", 00:10:46.166 "pending_data_buffer": 0, 00:10:46.166 "devices": [ 00:10:46.166 { 00:10:46.166 "name": "mlx5_0", 00:10:46.166 "polls": 3664598, 00:10:46.166 "idle_polls": 3664327, 00:10:46.166 "completions": 311, 00:10:46.166 "requests": 155, 00:10:46.166 "request_latency": 35514508, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 269, 00:10:46.166 "send_doorbell_updates": 133, 00:10:46.166 "total_recv_wrs": 4251, 00:10:46.166 "recv_doorbell_updates": 133 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "mlx5_1", 00:10:46.166 "polls": 3664598, 00:10:46.166 "idle_polls": 3664598, 00:10:46.166 "completions": 0, 00:10:46.166 "requests": 0, 00:10:46.166 "request_latency": 0, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 0, 00:10:46.166 "send_doorbell_updates": 0, 00:10:46.166 "total_recv_wrs": 4096, 00:10:46.166 "recv_doorbell_updates": 1 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "nvmf_tgt_poll_group_003", 00:10:46.166 "admin_qpairs": 2, 00:10:46.166 "io_qpairs": 26, 00:10:46.166 "current_admin_qpairs": 0, 00:10:46.166 "current_io_qpairs": 0, 00:10:46.166 "pending_bdev_io": 0, 00:10:46.166 "completed_nvme_io": 77, 00:10:46.166 "transports": [ 00:10:46.166 { 00:10:46.166 "trtype": "RDMA", 00:10:46.166 "pending_data_buffer": 0, 00:10:46.166 "devices": [ 00:10:46.166 { 00:10:46.166 "name": "mlx5_0", 00:10:46.166 "polls": 2939869, 00:10:46.166 "idle_polls": 2939631, 00:10:46.166 "completions": 262, 00:10:46.166 "requests": 131, 00:10:46.166 "request_latency": 24876898, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 206, 00:10:46.166 "send_doorbell_updates": 119, 00:10:46.166 "total_recv_wrs": 4227, 00:10:46.166 "recv_doorbell_updates": 120 00:10:46.166 }, 00:10:46.166 { 00:10:46.166 "name": "mlx5_1", 00:10:46.166 "polls": 2939869, 00:10:46.166 "idle_polls": 2939869, 00:10:46.166 "completions": 0, 00:10:46.166 "requests": 0, 00:10:46.166 "request_latency": 0, 00:10:46.166 "pending_free_request": 0, 00:10:46.166 "pending_rdma_read": 0, 00:10:46.166 "pending_rdma_write": 0, 00:10:46.166 "pending_rdma_send": 0, 00:10:46.166 "total_send_wrs": 0, 00:10:46.166 "send_doorbell_updates": 0, 00:10:46.166 "total_recv_wrs": 4096, 00:10:46.166 "recv_doorbell_updates": 1 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 } 00:10:46.166 ] 00:10:46.166 }' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1302 > 0 )) 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 139328376 > 0 )) 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.166 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:46.424 rmmod nvme_rdma 00:10:46.424 rmmod nvme_fabrics 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.424 
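The jsum helper exercised above is only visible here through its xtrace; a minimal sketch consistent with the trace (filter held in a local, jq extracting one numeric field per poll group from the nvmf_get_stats JSON, awk folding the values into a total) would be the following. The variable name stats matches the capture at target/rpc.sh@110 above; everything else is an assumption.

    jsum() {
        local filter=$1
        # one value per poll group -> summed into a single total
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

The totals asserted above check out against the captured JSON: admin_qpairs 2+2+1+2 = 7, io_qpairs 27+26+26+26 = 105, completions 169+560+311+262 = 1302, and request_latency 9070298+69866672+35514508+24876898 = 139328376.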
16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3718102 ']' 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3718102 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3718102 ']' 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3718102 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718102 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718102' 00:10:46.424 killing process with pid 3718102 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3718102 00:10:46.424 16:23:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3718102 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:46.682 00:10:46.682 real 0m35.560s 00:10:46.682 user 2m0.230s 00:10:46.682 sys 0m5.618s 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.682 ************************************ 00:10:46.682 END TEST nvmf_rpc 00:10:46.682 ************************************ 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.682 ************************************ 00:10:46.682 START TEST nvmf_invalid 00:10:46.682 ************************************ 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:46.682 * Looking for test storage... 
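The killprocess sequence traced above (kill -0 probe, uname gate, ps comm lookup, sudo guard, kill, wait) condenses to roughly the following; this is a sketch of the pattern, not the helper verbatim:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1             # no pid was captured
        kill -0 "$pid" || return 0            # process already gone
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

Here the target was pid 3718102, whose comm resolved to reactor_0, i.e. the SPDK nvmf target reactor started earlier in the run, so the guard passes and the process is killed and reaped.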
00:10:46.682 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.682 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.940 --rc genhtml_branch_coverage=1 00:10:46.940 --rc genhtml_function_coverage=1 00:10:46.940 --rc genhtml_legend=1 00:10:46.940 --rc geninfo_all_blocks=1 00:10:46.940 --rc geninfo_unexecuted_blocks=1 00:10:46.940 00:10:46.940 ' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.940 --rc genhtml_branch_coverage=1 00:10:46.940 --rc genhtml_function_coverage=1 00:10:46.940 --rc genhtml_legend=1 00:10:46.940 --rc geninfo_all_blocks=1 00:10:46.940 --rc geninfo_unexecuted_blocks=1 00:10:46.940 00:10:46.940 ' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.940 --rc genhtml_branch_coverage=1 00:10:46.940 --rc genhtml_function_coverage=1 00:10:46.940 --rc genhtml_legend=1 00:10:46.940 --rc geninfo_all_blocks=1 00:10:46.940 --rc geninfo_unexecuted_blocks=1 00:10:46.940 00:10:46.940 ' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.940 --rc genhtml_branch_coverage=1 00:10:46.940 --rc genhtml_function_coverage=1 00:10:46.940 --rc genhtml_legend=1 00:10:46.940 --rc geninfo_all_blocks=1 00:10:46.940 --rc geninfo_unexecuted_blocks=1 00:10:46.940 00:10:46.940 ' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:46.940 
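The cmp_versions walk above splits each version string on '.', '-' and ':' and compares component by component to decide whether the installed lcov predates 2.x. The same decision can be reached with sort -V instead of the manual loop; this is a sketch, not the script's own code:

    version_lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        # lcov 1.x takes branch/function coverage via the --rc spelling seen below
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

With lcov 1.15, as probed above, the 1.15 < 2 branch is taken and the legacy --rc options are exported.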
16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.940 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.941 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.941 16:23:41 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.198 16:23:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:52.198 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:52.198 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:52.198 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:52.199 Found net devices under 0000:18:00.0: mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:52.199 Found net devices under 0000:18:00.1: mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:10:52.199 16:23:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.199 16:23:46 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:52.199 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.199 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:52.199 altname enp24s0f0np0 00:10:52.199 altname ens785f0np0 00:10:52.199 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.199 valid_lft forever preferred_lft forever 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:52.199 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.199 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:52.199 altname enp24s0f1np1 00:10:52.199 altname ens785f1np1 00:10:52.199 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.199 valid_lft forever preferred_lft forever 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.199 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:52.200 192.168.100.9' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:52.200 192.168.100.9' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- 
# echo '192.168.100.8 00:10:52.200 192.168.100.9' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3726794 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3726794 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3726794 ']' 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.200 [2024-12-06 16:23:46.657494] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:10:52.200 [2024-12-06 16:23:46.657538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.200 [2024-12-06 16:23:46.716713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.200 [2024-12-06 16:23:46.756023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.200 [2024-12-06 16:23:46.756060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
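The head/tail plumbing above just peels the first and second entries off the newline-separated RDMA_IP_LIST, each of which was read off one mlx netdev earlier with the same ip/awk/cut pipeline. Written out directly:

    # Addresses as discovered above on mlx_0_0 / mlx_0_1
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.9

    # Per-interface extraction, as traced at nvmf/common.sh@117:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1            # 192.168.100.8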
00:10:52.200 [2024-12-06 16:23:46.756067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.200 [2024-12-06 16:23:46.756072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.200 [2024-12-06 16:23:46.756077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.200 [2024-12-06 16:23:46.757296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.200 [2024-12-06 16:23:46.757317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.200 [2024-12-06 16:23:46.757390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.200 [2024-12-06 16:23:46.757393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:52.200 16:23:46 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30481 00:10:52.458 [2024-12-06 16:23:47.050557] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:52.458 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:52.458 { 00:10:52.458 "nqn": "nqn.2016-06.io.spdk:cnode30481", 00:10:52.458 "tgt_name": "foobar", 00:10:52.458 "method": "nvmf_create_subsystem", 00:10:52.458 "req_id": 1 00:10:52.458 } 00:10:52.458 Got JSON-RPC error response 00:10:52.458 response: 00:10:52.458 { 00:10:52.458 "code": -32603, 00:10:52.458 "message": "Unable to find target foobar" 00:10:52.458 }' 00:10:52.458 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:52.458 { 00:10:52.458 "nqn": "nqn.2016-06.io.spdk:cnode30481", 00:10:52.458 "tgt_name": "foobar", 00:10:52.458 "method": "nvmf_create_subsystem", 00:10:52.458 "req_id": 1 00:10:52.458 } 00:10:52.458 Got JSON-RPC error response 00:10:52.458 response: 00:10:52.458 { 00:10:52.458 "code": -32603, 00:10:52.458 "message": "Unable to find target foobar" 00:10:52.458 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:52.458 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:52.458 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32429 00:10:52.715 [2024-12-06 16:23:47.243184] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode32429: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:52.715 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:52.715 { 00:10:52.715 "nqn": "nqn.2016-06.io.spdk:cnode32429", 00:10:52.715 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:52.715 "method": "nvmf_create_subsystem", 00:10:52.715 "req_id": 1 00:10:52.715 } 00:10:52.715 Got JSON-RPC error response 00:10:52.715 response: 00:10:52.715 { 00:10:52.715 "code": -32602, 00:10:52.715 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:52.715 }' 00:10:52.715 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:52.715 { 00:10:52.715 "nqn": "nqn.2016-06.io.spdk:cnode32429", 00:10:52.715 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:52.715 "method": "nvmf_create_subsystem", 00:10:52.715 "req_id": 1 00:10:52.715 } 00:10:52.715 Got JSON-RPC error response 00:10:52.715 response: 00:10:52.715 { 00:10:52.715 "code": -32602, 00:10:52.715 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:52.715 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:52.715 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:52.716 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24663 00:10:52.716 [2024-12-06 16:23:47.435804] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24663: invalid model number 'SPDK_Controller' 00:10:52.974 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:52.974 { 00:10:52.974 "nqn": "nqn.2016-06.io.spdk:cnode24663", 00:10:52.974 "model_number": "SPDK_Controller\u001f", 00:10:52.974 "method": "nvmf_create_subsystem", 00:10:52.974 "req_id": 1 00:10:52.974 } 00:10:52.974 Got JSON-RPC error response 00:10:52.974 response: 00:10:52.974 { 00:10:52.974 "code": -32602, 00:10:52.974 "message": "Invalid MN SPDK_Controller\u001f" 00:10:52.974 }' 00:10:52.974 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:52.974 { 00:10:52.974 "nqn": "nqn.2016-06.io.spdk:cnode24663", 00:10:52.974 "model_number": "SPDK_Controller\u001f", 00:10:52.974 "method": "nvmf_create_subsystem", 00:10:52.974 "req_id": 1 00:10:52.974 } 00:10:52.974 Got JSON-RPC error response 00:10:52.974 response: 00:10:52.974 { 00:10:52.975 "code": -32602, 00:10:52.975 "message": "Invalid MN SPDK_Controller\u001f" 00:10:52.975 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # local chars 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
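gen_random_s, traced character by character above and below, draws codes from the 32..127 table, renders each as a \xNN escape, and appends it (RANDOM=0 at invalid.sh@16 makes the sequence reproducible across runs). A compact sketch of the same loop follows; printf -v is used here so that a picked space (0x20) survives, where command substitution would strip it:

    gen_random_s() {
        local length=$1 ll string= hex ch
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' $(( RANDOM % 96 + 32 ))    # codes 32..127
            printf -v ch "\\x$hex"                        # render \xNN as a char
            string+=$ch
        done
        echo "$string"
    }

    gen_random_s 21    # e.g. the 21-char random string being assembled here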
00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x39' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:52.975 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:52.976 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:52.976 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:10:52.976 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ','\''k+&,sp<;K5!HW|&"h9:' 00:10:52.976 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ','\''k+&,sp<;K5!HW|&"h9:' nqn.2016-06.io.spdk:cnode5901 00:10:53.234 [2024-12-06 16:23:47.760829] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5901: invalid serial number ','k+&,sp<;K5!HW|&"h9:' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:53.234 { 00:10:53.234 "nqn": "nqn.2016-06.io.spdk:cnode5901", 00:10:53.234 "serial_number": ",'\''k+&,sp<;K5!HW|&\"h9:", 00:10:53.234 "method": "nvmf_create_subsystem", 00:10:53.234 "req_id": 1 00:10:53.234 } 00:10:53.234 Got JSON-RPC error response 00:10:53.234 response: 00:10:53.234 { 00:10:53.234 "code": -32602, 00:10:53.234 "message": "Invalid SN ,'\''k+&,sp<;K5!HW|&\"h9:" 00:10:53.234 }' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:53.234 { 00:10:53.234 "nqn": "nqn.2016-06.io.spdk:cnode5901", 00:10:53.234 "serial_number": ",'k+&,sp<;K5!HW|&\"h9:", 00:10:53.234 "method": "nvmf_create_subsystem", 00:10:53.234 "req_id": 1 00:10:53.234 } 00:10:53.234 Got JSON-RPC error response 00:10:53.234 response: 00:10:53.234 { 00:10:53.234 "code": -32602, 00:10:53.234 "message": "Invalid SN ,'k+&,sp<;K5!HW|&\"h9:" 00:10:53.234 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:53.234 16:23:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 51 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:53.234 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:53.235 16:23:47 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x70' 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:53.235 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:10:53.493 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:47 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x20' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ == \- ]] 00:10:53.494 16:23:48 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ' oM;aK3H!=OvMKfq[1PN4\Z-C,p$}$!/rY# ver2_l ? 
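For reference: the character-by-character xtrace above is target/invalid.sh's gen_random_s helper assembling a random serial number, one printable-ASCII code point at a time (printf %x yields the hex code, echo -e '\xHH' renders the character, string+= appends it, and line 28 guards against a leading '-'). A condensed bash sketch of that loop, reconstructed from the trace alone — this is an approximation, not the verbatim invalid.sh source, and the leading-dash handling is an assumption:

    # Sketch reconstructed from the xtrace; not the verbatim invalid.sh source.
    gen_random_s() {
        local length=$1 ll string= c
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point in 32..127 (the chars array in the trace) and
            # append it via printf's \xHH escape; printf -v preserves spaces
            printf -v c "\\x$(printf '%x' $((RANDOM % 96 + 32)))"
            string+=$c
        done
        # line 28 compares the first character against '-' (assumed: escape it
        # so the result is not parsed as an option by rpc.py)
        [[ $string == -* ]] && string="\\$string"
        echo "$string"
    }

The gen_random_s 41 call traced above is this loop run with length=41; the earlier pass produced the 21-character serial number that nvmf_create_subsystem rejected with "Invalid SN".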
ver1_l : ver2_l) )) 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:56.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.074 --rc genhtml_branch_coverage=1 00:10:56.074 --rc genhtml_function_coverage=1 00:10:56.074 --rc genhtml_legend=1 00:10:56.074 --rc geninfo_all_blocks=1 00:10:56.074 --rc geninfo_unexecuted_blocks=1 00:10:56.074 00:10:56.074 ' 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:56.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.074 --rc genhtml_branch_coverage=1 00:10:56.074 --rc genhtml_function_coverage=1 00:10:56.074 --rc genhtml_legend=1 00:10:56.074 --rc geninfo_all_blocks=1 00:10:56.074 --rc geninfo_unexecuted_blocks=1 00:10:56.074 00:10:56.074 ' 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:56.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.074 --rc genhtml_branch_coverage=1 00:10:56.074 --rc genhtml_function_coverage=1 00:10:56.074 --rc genhtml_legend=1 00:10:56.074 --rc geninfo_all_blocks=1 00:10:56.074 --rc geninfo_unexecuted_blocks=1 00:10:56.074 00:10:56.074 ' 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:56.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.074 --rc genhtml_branch_coverage=1 00:10:56.074 --rc genhtml_function_coverage=1 00:10:56.074 --rc genhtml_legend=1 00:10:56.074 --rc geninfo_all_blocks=1 00:10:56.074 --rc geninfo_unexecuted_blocks=1 00:10:56.074 00:10:56.074 ' 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.074 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.075 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.075 16:23:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:02.631 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:02.631 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:02.631 Found net devices under 0000:18:00.0: mlx_0_0 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:02.631 Found net devices under 0000:18:00.1: mlx_0_1 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.631 16:23:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:02.631 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.632 
16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:02.632 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:02.632 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:02.632 altname enp24s0f0np0 00:11:02.632 altname ens785f0np0 00:11:02.632 inet 192.168.100.8/24 scope global mlx_0_0 00:11:02.632 valid_lft forever preferred_lft forever 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:02.632 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:02.632 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:02.632 altname enp24s0f1np1 00:11:02.632 altname ens785f1np1 00:11:02.632 inet 192.168.100.9/24 scope global mlx_0_1 00:11:02.632 valid_lft forever preferred_lft forever 00:11:02.632 16:23:56 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.632 
16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:02.632 192.168.100.9' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:02.632 192.168.100.9' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:02.632 192.168.100.9' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3731004 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3731004 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3731004 ']' 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.632 16:23:56 
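[annotation] nvmfappstart above boils down to launching nvmf_tgt and polling the RPC socket until it answers. A sketch under the assumption that scripts/rpc.py and the rpc_get_methods RPC are available in the standard SPDK layout; the retry count and sleep interval are illustrative, not taken from the trace:

    # Start the target on cores 1-3 (-m 0xE) with all trace groups (-e 0xFFFF),
    # then wait until the UNIX-domain RPC socket accepts a trivial command.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done

The reactor-start notices that follow (cores 1, 2, 3) confirm the 0xE core mask took effect.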
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.632 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.632 [2024-12-06 16:23:56.550979] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:11:02.632 [2024-12-06 16:23:56.551025] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.632 [2024-12-06 16:23:56.610149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.633 [2024-12-06 16:23:56.648736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.633 [2024-12-06 16:23:56.648773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.633 [2024-12-06 16:23:56.648782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.633 [2024-12-06 16:23:56.648789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.633 [2024-12-06 16:23:56.648794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.633 [2024-12-06 16:23:56.650026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.633 [2024-12-06 16:23:56.650110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.633 [2024-12-06 16:23:56.650113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 [2024-12-06 16:23:56.812327] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1519800/0x151dcf0) succeed. 
00:11:02.633 [2024-12-06 16:23:56.820501] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x151adf0/0x155f390) succeed. 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 [2024-12-06 16:23:56.927017] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 NULL1 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3731121 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 
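[annotation] Stripped of xtrace noise, the target configuration above is four RPCs. The same sequence expressed as standalone rpc.py calls (a sketch; the trace issues them through the rpc_cmd wrapper, and the default socket path /var/tmp/spdk.sock is an assumption):

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks

The connect_stress client is then pointed at the resulting rdma/192.168.100.8:4420/cnode1 triple for ten seconds (-t 10), as the PERF_PID launch line shows, while the seq/cat loop that starts here queues RPCs into rpc.txt.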
16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:56 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 
16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.633 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.281 16:23:57 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.847 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.847 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:03.847 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.847 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.847 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.104 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.104 
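[annotation] The wall of repeated "kill -0 3731121" / rpc_cmd entries that follows is a liveness loop, not an error: while the stress client is alive, the harness keeps replaying the RPCs queued in rpc.txt so admin traffic runs concurrently with connect/disconnect load. A sketch of the shape this implies; the exact loop body in connect_stress.sh is an assumption, only the kill -0 probe and the rpc_cmd replay are certain from the log:

    # PERF_PID is the connect_stress client started above (3731121 here).
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd <"$rpcs"    # rpc_cmd feeds the newline-separated RPCs in rpc.txt
    done
    wait "$PERF_PID"        # reap the client; the later "No such process" is expected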
16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:04.104 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.104 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.104 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.362 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.362 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:04.362 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.362 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.362 16:23:58 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.620 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.620 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:04.620 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.620 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.620 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.184 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.184 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:05.184 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.184 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.184 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.441 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.441 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:05.441 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.441 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.441 16:23:59 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.698 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.698 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:05.698 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.698 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.698 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.956 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:05.956 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:05.956 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.956 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.956 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.214 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.214 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:06.214 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.214 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.214 16:24:00 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.778 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.778 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:06.778 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.778 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.778 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.035 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.035 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:07.035 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.035 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.035 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.292 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.292 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:07.292 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.292 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.292 16:24:01 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.550 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.550 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:07.550 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.550 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.550 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.807 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:07.807 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:07.807 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.807 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.807 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.371 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.371 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:08.371 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.371 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.371 16:24:02 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.628 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.628 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:08.628 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.628 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.628 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.885 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.885 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:08.885 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:08.885 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.885 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.143 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.143 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:09.143 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.143 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.143 16:24:03 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.400 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.400 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:09.400 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.400 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.400 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.966 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.966 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:09.966 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:09.966 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.966 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.224 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.224 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:10.224 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.224 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.224 16:24:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.481 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.481 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:10.481 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.481 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.481 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.739 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.739 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:10.739 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:10.739 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.739 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.303 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.303 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:11.303 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.303 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.303 16:24:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.560 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.560 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:11.560 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.560 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.560 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:11.817 16:24:06 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.817 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:11.817 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:11.817 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.817 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.074 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.074 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:12.074 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.074 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.074 16:24:06 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.331 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.331 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:12.331 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:12.331 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.331 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.588 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3731121 00:11:12.845 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3731121) - No such process 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3731121 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-rdma 00:11:12.845 rmmod nvme_rdma 00:11:12.845 rmmod nvme_fabrics 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.845 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3731004 ']' 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3731004 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3731004 ']' 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3731004 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731004 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731004' 00:11:12.846 killing process with pid 3731004 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3731004 00:11:12.846 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3731004 00:11:13.103 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:13.104 00:11:13.104 real 0m17.274s 00:11:13.104 user 0m40.809s 00:11:13.104 sys 0m6.315s 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.104 ************************************ 00:11:13.104 END TEST nvmf_connect_stress 00:11:13.104 ************************************ 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.104 ************************************ 00:11:13.104 START TEST nvmf_fused_ordering 00:11:13.104 ************************************ 00:11:13.104 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
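[annotation] Teardown above follows a fixed pattern: unload nvme-rdma/nvme-fabrics with retries, then kill the target via killprocess. A sketch of killprocess as reconstructed from the traced checks; the sudo branch is simplified here, and the body is not the verbatim autotest_common.sh source:

    # Refuse to signal a missing PID or a sudo wrapper, mirroring the
    # uname / "ps --no-headers -o comm=" checks in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # simplification: skip sudo parents
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

In this run the check resolves to process_name=reactor_1, so pid 3731004 (the nvmf_tgt app) is killed and waited on, closing out nvmf_connect_stress before nvmf_fused_ordering begins.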
--transport=rdma 00:11:13.104 * Looking for test storage... 00:11:13.361 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.361 --rc genhtml_branch_coverage=1 00:11:13.361 --rc genhtml_function_coverage=1 00:11:13.361 --rc genhtml_legend=1 00:11:13.361 --rc geninfo_all_blocks=1 00:11:13.361 --rc geninfo_unexecuted_blocks=1 00:11:13.361 00:11:13.361 ' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.361 --rc genhtml_branch_coverage=1 00:11:13.361 --rc genhtml_function_coverage=1 00:11:13.361 --rc genhtml_legend=1 00:11:13.361 --rc geninfo_all_blocks=1 00:11:13.361 --rc geninfo_unexecuted_blocks=1 00:11:13.361 00:11:13.361 ' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.361 --rc genhtml_branch_coverage=1 00:11:13.361 --rc genhtml_function_coverage=1 00:11:13.361 --rc genhtml_legend=1 00:11:13.361 --rc geninfo_all_blocks=1 00:11:13.361 --rc geninfo_unexecuted_blocks=1 00:11:13.361 00:11:13.361 ' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.361 --rc genhtml_branch_coverage=1 00:11:13.361 --rc genhtml_function_coverage=1 00:11:13.361 --rc genhtml_legend=1 00:11:13.361 --rc geninfo_all_blocks=1 00:11:13.361 --rc geninfo_unexecuted_blocks=1 00:11:13.361 00:11:13.361 ' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
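[annotation] The cmp_versions walk above decides which lcov option set to export by testing "lt 1.15 2". A simplified sketch specialized to the strictly-less-than case the trace exercises: split each version on "." / "-" / ":" and compare numeric fields left to right, treating missing fields as 0 (the full scripts/common.sh helper also handles other operators):

    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<<"$1"
        IFS='.-:' read -ra ver2 <<<"$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    # version_lt 1.15 2 -> true: lcov 1.15 predates 2.x, so the legacy
    # "--rc lcov_branch_coverage=1 ..." LCOV_OPTS are exported, as traced above.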
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.361 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.362 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.362 16:24:07 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
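[annotation] The "[: : integer expression expected" line above is a genuine runtime diagnostic, not log corruption: test's -eq operator received an empty string ('' -eq 1) because the variable on the left was unset. Execution continues since the test simply evaluates false. A defensive sketch that defaults an empty value to 0 before the numeric comparison; the variable name SOME_FLAG is hypothetical:

    # '' is not an integer, so '[ "" -eq 1 ]' prints exactly the diagnostic
    # seen in the trace; defaulting with :- avoids it.
    some_flag="${SOME_FLAG:-0}"
    if [ "$some_flag" -eq 1 ]; then
        echo "flag enabled"
    fi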
local -ga x722 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:18.656 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.914 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.914 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:18.914 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:18.914 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:18.915 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:18.915 Found net devices under 0000:18:00.0: mlx_0_0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:18.915 Found net devices under 0000:18:00.1: mlx_0_1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.915 16:24:13 
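
Two steps are visible above: NICs are matched by PCI vendor:device pair (0x8086 Intel e810/x722, 0x15b3 Mellanox mlx), and each matched PCI function is mapped to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/ and stripping the path with the ##*/ expansion, which is how 0000:18:00.0 resolves to mlx_0_0. A standalone sketch of both steps read directly from sysfs (illustrative, not SPDK's pci_bus_cache implementation):

    intel=0x8086 mellanox=0x15b3
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [ "$vendor" = "$mellanox" ] || continue             # keep only Mellanox ports, as above
        echo "Found ${pci##*/} ($vendor - $device)"
        pci_net_devs=("$pci/net/"*)                         # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface names
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done
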
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.915 
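
rdma_device_init above is essentially a fixed modprobe sequence; loading the same modules by hand looks like this (the warning message is illustrative):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || echo "could not load $mod" >&2
    done
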
16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:18.915 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.915 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:18.915 altname enp24s0f0np0 00:11:18.915 altname ens785f0np0 00:11:18.915 inet 192.168.100.8/24 scope global mlx_0_0 00:11:18.915 valid_lft forever preferred_lft forever 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:18.915 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.915 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:18.915 altname enp24s0f1np1 00:11:18.915 altname ens785f1np1 00:11:18.915 inet 192.168.100.9/24 scope global mlx_0_1 00:11:18.915 valid_lft forever preferred_lft forever 00:11:18.915 16:24:13 
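
get_ip_address above isolates an interface's IPv4 address by taking the fourth field of the one-line ip output and cutting off the /24 prefix length, yielding 192.168.100.8 and 192.168.100.9 for the two ports. The same pipeline as a self-contained function:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this node
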
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:18.915 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.916 
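
get_rdma_if_list above cross-checks every detected netdev against the rxe_cfg output and prints a name only when both loops match; continue 2 advances the outer loop directly, skipping the remaining inner entries. The pattern in miniature (device lists illustrative):

    net_devs=(mlx_0_0 mlx_0_1)
    rxe_net_devs=(mlx_0_0 mlx_0_1)     # as reported by rxe_cfg rxe-net
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2             # next net_dev; skip remaining rxe entries
            fi
        done
    done
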
16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:18.916 192.168.100.9' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:18.916 192.168.100.9' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:18.916 192.168.100.9' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3737022 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3737022 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3737022 ']' 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
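
Above, the two addresses are folded into a newline-separated RDMA_IP_LIST and then peeled apart with head and tail. Reduced to its essentials:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
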
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:18.916 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.916 [2024-12-06 16:24:13.638117] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:11:18.916 [2024-12-06 16:24:13.638169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.174 [2024-12-06 16:24:13.696891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.174 [2024-12-06 16:24:13.734664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.174 [2024-12-06 16:24:13.734697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.174 [2024-12-06 16:24:13.734705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.174 [2024-12-06 16:24:13.734712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.174 [2024-12-06 16:24:13.734717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.174 [2024-12-06 16:24:13.735212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.174 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.174 [2024-12-06 16:24:13.888562] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b770c0/0x1b7b5b0) succeed. 00:11:19.174 [2024-12-06 16:24:13.896361] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b78570/0x1bbcc50) succeed. 
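
nvmfappstart above launches the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks until the application listens on /var/tmp/spdk.sock; the retry cap mirrors the max_retries=100 visible in the trace, while the poll interval below is an assumption, not SPDK's exact logic:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                               # kept so killprocess can stop it later
    for ((i = 0; i < 100; i++)); do
        [ -S /var/tmp/spdk.sock ] && break   # RPC socket is up, target is ready
        sleep 0.1                            # illustrative interval
    done
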
00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 [2024-12-06 16:24:13.938483] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 NULL1 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.432 16:24:13 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:19.432 [2024-12-06 16:24:13.991817] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
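
The target configuration above comes down to a handful of RPCs: create the RDMA transport, create subsystem cnode1 (serial SPDK00000000000001, up to 10 namespaces), listen on 192.168.100.8:4420, create the 1000 MB NULL1 bdev with 512-byte blocks, wait for bdev examination, and attach NULL1 as namespace 1. The equivalent calls through SPDK's rpc.py (which also defaults to /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
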
00:11:19.432 [2024-12-06 16:24:13.991849] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737042 ]
00:11:19.690 Attached to nqn.2016-06.io.spdk:cnode1
00:11:19.690 Namespace ID: 1 size: 1GB
00:11:19.690 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1022 identical one-per-iteration progress lines, timestamps 00:11:19.690 through 00:11:19.952]
00:11:19.952 fused_ordering(1023)
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:19.952 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3737022 ']' 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3737022 ']' 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3737022' 00:11:20.210 killing process with pid 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3737022 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:20.210 00:11:20.210 real 0m7.177s 00:11:20.210 user 0m3.636s 00:11:20.210 sys 0m4.625s 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.210 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:20.210 ************************************ 00:11:20.210 END TEST nvmf_fused_ordering 00:11:20.210 ************************************ 00:11:20.468 16:24:14 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:20.468 16:24:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.468 16:24:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.468 16:24:14 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.468 ************************************ 00:11:20.468 START TEST nvmf_ns_masking 00:11:20.468 ************************************ 00:11:20.468 16:24:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:20.468 * Looking for test storage... 
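Before the ns_masking traces continue, a note on the fused_ordering teardown just logged: it is the harness's standard exit path. A minimal sketch, reconstructed from the nvmf/common.sh line numbers in the trace (the real nvmftestfini/killprocess helpers carry more checks; $nvmfpid stands for the captured target pid, 3737022 in this run, and the loop rationale is an assumption):

  trap - SIGINT SIGTERM EXIT        # fused_ordering.sh@23: drop the suite's cleanup trap
  sync                              # nvmf/common.sh@121: flush before unloading modules
  set +e                            # nvmf/common.sh@124: unload is allowed to fail
  for i in {1..20}; do              # nvmf/common.sh@125: presumably retries until refcounts drop
      modprobe -v -r nvme-rdma && break
  done
  modprobe -v -r nvme-fabrics       # nvmf/common.sh@127
  set -e
  kill "$nvmfpid"; wait "$nvmfpid"  # killprocess: stop the nvmf_tgt reactor

The timing summary (real 0m7.177s) and the END TEST banner above are emitted by the run_test wrapper around each suite.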
00:11:20.468 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:20.468 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.469 --rc genhtml_branch_coverage=1 00:11:20.469 --rc genhtml_function_coverage=1 00:11:20.469 --rc genhtml_legend=1 00:11:20.469 --rc geninfo_all_blocks=1 00:11:20.469 --rc geninfo_unexecuted_blocks=1 00:11:20.469 00:11:20.469 ' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.469 --rc genhtml_branch_coverage=1 00:11:20.469 --rc genhtml_function_coverage=1 00:11:20.469 --rc genhtml_legend=1 00:11:20.469 --rc geninfo_all_blocks=1 00:11:20.469 --rc geninfo_unexecuted_blocks=1 00:11:20.469 00:11:20.469 ' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.469 --rc genhtml_branch_coverage=1 00:11:20.469 --rc genhtml_function_coverage=1 00:11:20.469 --rc genhtml_legend=1 00:11:20.469 --rc geninfo_all_blocks=1 00:11:20.469 --rc geninfo_unexecuted_blocks=1 00:11:20.469 00:11:20.469 ' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.469 --rc genhtml_branch_coverage=1 00:11:20.469 --rc genhtml_function_coverage=1 00:11:20.469 --rc genhtml_legend=1 00:11:20.469 --rc geninfo_all_blocks=1 00:11:20.469 --rc geninfo_unexecuted_blocks=1 00:11:20.469 00:11:20.469 ' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.469 16:24:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:20.469 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:20.469 16:24:15 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=561a8598-4b06-4b71-9c57-b9cfe4f2e2d0 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a79556ee-912a-43b9-93a0-af3bdf0463b1 00:11:20.469 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7870be4c-1bcc-41b3-b65e-771c366df1fa 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.470 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.727 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:20.727 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:20.727 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:20.727 16:24:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.982 16:24:20 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:25.982 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:25.982 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:25.982 Found net devices under 0000:18:00.0: mlx_0_0 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.982 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:25.983 Found net devices under 0000:18:00.1: mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:25.983 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.983 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:25.983 altname enp24s0f0np0 00:11:25.983 altname ens785f0np0 00:11:25.983 inet 192.168.100.8/24 scope global mlx_0_0 00:11:25.983 valid_lft forever preferred_lft forever 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:25.983 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:25.983 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:25.983 altname enp24s0f1np1 00:11:25.983 altname ens785f1np1 00:11:25.983 inet 192.168.100.9/24 scope global mlx_0_1 00:11:25.983 valid_lft forever preferred_lft forever 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:25.983 192.168.100.9' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:25.983 192.168.100.9' 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:11:25.983 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:25.984 192.168.100.9' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3740431 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3740431 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3740431 ']' 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:25.984 [2024-12-06 16:24:20.404777] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:11:25.984 [2024-12-06 16:24:20.404823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.984 [2024-12-06 16:24:20.462845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.984 [2024-12-06 16:24:20.500796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.984 [2024-12-06 16:24:20.500831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.984 [2024-12-06 16:24:20.500837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.984 [2024-12-06 16:24:20.500843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.984 [2024-12-06 16:24:20.500847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.984 [2024-12-06 16:24:20.501316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.984 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:26.243 [2024-12-06 16:24:20.798591] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf45dc0/0xf4a2b0) succeed. 00:11:26.243 [2024-12-06 16:24:20.806422] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf47270/0xf8b950) succeed. 
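For orientation, the target bring-up just traced reduces to two commands, shown here with the exact paths and flags from this run (the listen address 192.168.100.8 was derived earlier via ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1); a sketch, not the verbatim helpers:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # nvmf/common.sh@508: launch the target app; it logs its DPDK EAL setup and
  # "Reactor started on core 0", then listens on /var/tmp/spdk.sock
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!            # 3740431 in this run; waitforlisten polls the RPC socket
  # ns_masking.sh@53: create the RDMA transport (creates IB devices mlx5_0/mlx5_1 here)
  $spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192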
00:11:26.243 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:26.243 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:26.243 16:24:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:26.502 Malloc1 00:11:26.502 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:26.502 Malloc2 00:11:26.502 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.760 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:27.018 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:27.018 [2024-12-06 16:24:21.699180] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:27.018 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:27.018 16:24:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7870be4c-1bcc-41b3-b65e-771c366df1fa -a 192.168.100.8 -s 4420 -i 4 00:11:27.275 16:24:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.275 16:24:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:27.276 16:24:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.276 16:24:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:27.276 16:24:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
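The provisioning and host attach traced above, condensed into the underlying commands (all values copied from the traces; -I passes the host identifier generated by uuidgen at ns_masking.sh@19, and $spdk is the workspace path from the sketch above):

  rpc=$spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB, 512 B blocks (ns_masking.sh@58)
  $rpc bdev_malloc_create 64 512 -b Malloc2     # ns_masking.sh@59
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 7870be4c-1bcc-41b3-b65e-771c366df1fa -a 192.168.100.8 -s 4420 -i 4

waitforserial then polls lsblk -l -o NAME,SERIAL until a device with serial SPDKISFASTANDAWESOME appears, as the following traces show.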
| .Paths[0].Name' 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:29.390 [ 0]:0x1 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca444086afa341ed8a3384a34acb5d07 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca444086afa341ed8a3384a34acb5d07 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.390 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:29.647 [ 0]:0x1 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca444086afa341ed8a3384a34acb5d07 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca444086afa341ed8a3384a34acb5d07 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:29.647 [ 1]:0x2 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:29.647 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.905 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:29.905 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.905 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:29.905 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:11:30.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.163 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.419 16:24:24 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:30.419 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:30.419 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7870be4c-1bcc-41b3-b65e-771c366df1fa -a 192.168.100.8 -s 4420 -i 4 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:31.005 16:24:25 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.901 [ 0]:0x2 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.901 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:33.158 [ 0]:0x1 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:33.158 16:24:27 
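The NOT wrapper traced here inverts a command's result, so the assertion passes only when the namespace is genuinely hidden. The real helper also vets its argument with type -t (visible as valid_exec_arg in the trace); a stripped-down equivalent of the behavior shown:

NOT() {
    local es=0
    "$@" || es=$?
    # A status above 128 means the command died from a signal; propagate that
    # as a hard failure rather than an expected one.
    (( es > 128 )) && return "$es"
    (( es != 0 ))   # NOT succeeds exactly when the wrapped command failed
}

NOT ns_is_visible 0x1   # passes once NSID 1 has been masked away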
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca444086afa341ed8a3384a34acb5d07 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca444086afa341ed8a3384a34acb5d07 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:33.158 [ 1]:0x2 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.158 16:24:27 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:33.416 [ 0]:0x2 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:33.416 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:33.674 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:33.674 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:33.674 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:33.674 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.931 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:33.932 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:33.932 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7870be4c-1bcc-41b3-b65e-771c366df1fa -a 192.168.100.8 -s 4420 -i 4 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:34.497 16:24:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.400 16:24:30 
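The disconnect/reconnect around this point demonstrates the core allow-list semantics: NSID 1 was re-created with --no-auto-visible, so it surfaces only after nvmf_ns_add_host names the host explicitly. The two RPCs in isolation (rpc.py abbreviates the full scripts/rpc.py path used throughout this log):

# Re-create the namespace without automatic visibility...
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# ...then put host1 on its allow list; after reconnecting, the host should
# enumerate both NSID 1 and NSID 2 again.
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1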
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:36.400 16:24:30 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.400 [ 0]:0x1 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca444086afa341ed8a3384a34acb5d07 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca444086afa341ed8a3384a34acb5d07 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:36.400 [ 1]:0x2 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.400 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:36.658 16:24:31 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:36.658 [ 0]:0x2 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:36.658 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:36.916 [2024-12-06 16:24:31.542062] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:36.916 request: 00:11:36.916 { 00:11:36.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.916 "nsid": 2, 00:11:36.916 "host": "nqn.2016-06.io.spdk:host1", 00:11:36.916 "method": "nvmf_ns_remove_host", 00:11:36.916 "req_id": 1 00:11:36.916 } 00:11:36.916 Got JSON-RPC error response 00:11:36.916 response: 00:11:36.916 { 00:11:36.916 "code": -32602, 00:11:36.916 "message": "Invalid parameters" 00:11:36.916 } 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:36.916 16:24:31 
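The -32602 response above is the expected outcome of this NOT-wrapped step, presumably because NSID 2 was added without --no-auto-visible and therefore carries no per-host allow list for nvmf_ns_remove_host to edit; nvmf_rpc_ns_visible_paused rejects the request. The failing call, for reference:

rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
# -> JSON-RPC code -32602, "Invalid parameters"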
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:36.916 [ 0]:0x2 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:36.916 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:37.173 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=95b2f58a54664edeba5bf9c3f1def6c5 00:11:37.173 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 95b2f58a54664edeba5bf9c3f1def6c5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:37.173 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:37.173 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3742705 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3742705 /var/tmp/host.sock 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3742705 ']' 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:37.432 16:24:31 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.432 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:37.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
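From here the test spins up a second SPDK instance that plays the NVMe-oF initiator role, driven over its own RPC socket. A condensed sketch of the spawn-and-drive pattern from the traced ns_masking.sh lines 117-121 and 48 (spdk_tgt and rpc.py abbreviate the full build/bin and scripts paths):

# Start a host-side SPDK app on core mask 0x2 with a dedicated RPC socket.
spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
waitforlisten "$hostpid" /var/tmp/host.sock   # helper from autotest_common.sh

# Every host-side command is then rpc.py pointed at that socket:
hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }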
00:11:37.432 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.432 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:37.432 [2024-12-06 16:24:32.050555] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:11:37.432 [2024-12-06 16:24:32.050598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742705 ] 00:11:37.432 [2024-12-06 16:24:32.107683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.432 [2024-12-06 16:24:32.145457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.690 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.690 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:37.690 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.947 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 561a8598-4b06-4b71-9c57-b9cfe4f2e2d0 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 561A85984B064B719C57B9CFE4F2E2D0 -i 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a79556ee-912a-43b9-93a0-af3bdf0463b1 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:38.204 16:24:32 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A79556EE912A43B993A0AF3BDF0463B1 -i 00:11:38.461 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.719 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:38.719 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:38.719 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
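The uuid2nguid step above turns a canonical UUID into the 32-hex-digit NGUID passed via -g. The trace only shows the tr -d - stage; judging from the resulting argument, the helper also upper-cases, so a plausible equivalent is:

uuid2nguid() {
    # Strip dashes and upper-case the hex digits.
    local uuid=${1^^}
    tr -d - <<< "$uuid"
}
uuid2nguid 561a8598-4b06-4b71-9c57-b9cfe4f2e2d0   # -> 561A85984B064B719C57B9CFE4F2E2D0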
nvme0 00:11:38.976 nvme0n1 00:11:38.976 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:38.976 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:39.234 nvme1n2 00:11:39.234 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:39.234 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:39.234 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:39.234 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:39.234 16:24:33 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:39.492 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:39.492 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:39.492 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:39.492 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 561a8598-4b06-4b71-9c57-b9cfe4f2e2d0 == \5\6\1\a\8\5\9\8\-\4\b\0\6\-\4\b\7\1\-\9\c\5\7\-\b\9\c\f\e\4\f\2\e\2\d\0 ]] 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a79556ee-912a-43b9-93a0-af3bdf0463b1 == \a\7\9\5\5\6\e\e\-\9\1\2\a\-\4\3\b\9\-\9\3\a\0\-\a\f\3\b\d\f\0\4\6\3\b\1 ]] 00:11:39.750 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.007 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 561a8598-4b06-4b71-9c57-b9cfe4f2e2d0 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
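The host-side verification traced here attaches one controller per hostnqn and checks that the exposed bdev names and UUIDs line up with the masked namespaces. Condensed, reusing the hostrpc wrapper sketched earlier:

hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
# host1 sees NSID 1 (nvme0n1) and host2 sees NSID 2 (nvme1n2):
hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # -> nvme0n1 nvme1n2
hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'        # -> 561a8598-...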
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 561A85984B064B719C57B9CFE4F2E2D0 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 561A85984B064B719C57B9CFE4F2E2D0 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 561A85984B064B719C57B9CFE4F2E2D0 00:11:40.265 [2024-12-06 16:24:34.927477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:40.265 [2024-12-06 16:24:34.927508] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:40.265 [2024-12-06 16:24:34.927516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.265 request: 00:11:40.265 { 00:11:40.265 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.265 "namespace": { 00:11:40.265 "bdev_name": "invalid", 00:11:40.265 "nsid": 1, 00:11:40.265 "nguid": "561A85984B064B719C57B9CFE4F2E2D0", 00:11:40.265 "no_auto_visible": false, 00:11:40.265 "hide_metadata": false 00:11:40.265 }, 00:11:40.265 "method": "nvmf_subsystem_add_ns", 00:11:40.265 "req_id": 1 00:11:40.265 } 00:11:40.265 Got JSON-RPC error response 00:11:40.265 response: 00:11:40.265 { 00:11:40.265 "code": -32602, 00:11:40.265 "message": "Invalid parameters" 00:11:40.265 } 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.265 
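The second negative test above: asking the target to back a namespace with a bdev named invalid fails inside bdev_open_ext with -19 (ENODEV), which nvmf_rpc_ns_paused surfaces as the same -32602 JSON-RPC error. The call being exercised:

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 \
    -g 561A85984B064B719C57B9CFE4F2E2D0
# -> "Currently unable to find bdev with name: invalid", error=-19, RPC code -32602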
16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 561a8598-4b06-4b71-9c57-b9cfe4f2e2d0 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:40.265 16:24:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 561A85984B064B719C57B9CFE4F2E2D0 -i 00:11:40.522 16:24:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:42.418 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:42.418 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:42.418 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3742705 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3742705 ']' 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3742705 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3742705 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3742705' 00:11:42.675 killing process with pid 3742705 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3742705 00:11:42.675 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3742705 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:43.241 
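killprocess, traced below for the host-side app (pid 3742705), follows a guarded shape: confirm the pid is set and still names an SPDK reactor before terminating and reaping it. Approximately:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_1
    [ "$name" = sudo ] && return 1              # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}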
16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:43.241 rmmod nvme_rdma 00:11:43.241 rmmod nvme_fabrics 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3740431 ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3740431 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3740431 ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3740431 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3740431 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3740431' 00:11:43.241 killing process with pid 3740431 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3740431 00:11:43.241 16:24:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3740431 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:43.500 00:11:43.500 real 0m23.190s 00:11:43.500 user 0m29.952s 00:11:43.500 sys 0m5.961s 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.500 ************************************ 00:11:43.500 END TEST nvmf_ns_masking 00:11:43.500 ************************************ 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.500 16:24:38 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.758 ************************************ 00:11:43.759 START TEST nvmf_nvme_cli 00:11:43.759 ************************************ 
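nvmftestfini's module teardown, traced above, retries the unload because nvme-rdma can stay busy for a moment after the last disconnect. The loop body between set +e and set -e is only partially visible in this trace, so this is an assumed reconstruction:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumed back-off; not visible in the trace
done
set -e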
00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:43.759 * Looking for test storage... 00:11:43.759 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.759 --rc genhtml_branch_coverage=1 00:11:43.759 --rc genhtml_function_coverage=1 00:11:43.759 --rc genhtml_legend=1 00:11:43.759 --rc geninfo_all_blocks=1 00:11:43.759 --rc geninfo_unexecuted_blocks=1 00:11:43.759 00:11:43.759 ' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.759 --rc genhtml_branch_coverage=1 00:11:43.759 --rc genhtml_function_coverage=1 00:11:43.759 --rc genhtml_legend=1 00:11:43.759 --rc geninfo_all_blocks=1 00:11:43.759 --rc geninfo_unexecuted_blocks=1 00:11:43.759 00:11:43.759 ' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.759 --rc genhtml_branch_coverage=1 00:11:43.759 --rc genhtml_function_coverage=1 00:11:43.759 --rc genhtml_legend=1 00:11:43.759 --rc geninfo_all_blocks=1 00:11:43.759 --rc geninfo_unexecuted_blocks=1 00:11:43.759 00:11:43.759 ' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.759 --rc genhtml_branch_coverage=1 00:11:43.759 --rc genhtml_function_coverage=1 00:11:43.759 --rc genhtml_legend=1 00:11:43.759 --rc geninfo_all_blocks=1 00:11:43.759 --rc geninfo_unexecuted_blocks=1 00:11:43.759 00:11:43.759 ' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
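The ver1/ver2 juggling above is nvme_cli.sh gating on the installed lcov version (lt 1.15 2) to choose coverage flags. The traced cmp_versions splits on dots and compares field by field; a compact stand-in with the same outcome for these inputs, not the literal implementation:

lt() {
    # True when $1 sorts strictly before $2 under version ordering.
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lt 1.15 2 && echo "1.15 < 2"   # matches the traced result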
uname -s 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.759 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.760 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.760 16:24:38 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.760 16:24:38 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:50.307 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:50.307 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
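The backslash-riddled comparisons above (`[[ 0x1015 == \0\x\1\0\1\7 ]]` and friends) are not log corruption: xtrace escapes every character of an unquoted right-hand pattern in `[[ == ]]` so the echoed form stays a literal match. In source form the checks are ordinary device-ID tests, roughly:

    # What the escaped trace entries correspond to in nvmf/common.sh; the
    # branch bodies are outside this excerpt.
    device=0x1015              # this run's Mellanox part, per the Found lines
    [[ $device == 0x1017 ]]    # false here
    [[ $device == 0x1019 ]]    # false here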
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:50.307 Found net devices under 0000:18:00.0: mlx_0_0 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:50.307 Found net devices under 0000:18:00.1: mlx_0_1 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
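Each `Found net devices under ...` line comes from globbing the device's net/ directory in sysfs and stripping the path, exactly as the @411/@427/@428 entries show. Condensed:

    pci=0000:18:00.0                                   # address from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # full sysfs paths
    pci_net_devs=("${pci_net_devs[@]##*/}")            # bare interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0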
Linux ']' 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:50.307 16:24:43 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.307 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
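The nested loop traced at @105-@109 is get_rdma_if_list intersecting the detected netdevs with what rxe_cfg (a wrapper around scripts/rxe_cfg_small.sh) reports, with `continue 2` jumping back to the outer loop once an interface matches. Reassembled from the trace:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        for net_dev in "${net_devs[@]}"; do            # net_devs is global
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2
                fi
            done
        done
    }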
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:50.308 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.308 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:50.308 altname enp24s0f0np0 00:11:50.308 altname ens785f0np0 00:11:50.308 inet 192.168.100.8/24 scope global mlx_0_0 00:11:50.308 valid_lft forever preferred_lft forever 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:50.308 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.308 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:50.308 altname enp24s0f1np1 00:11:50.308 altname ens785f1np1 00:11:50.308 inet 192.168.100.9/24 scope global mlx_0_1 00:11:50.308 valid_lft forever preferred_lft forever 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.308 16:24:44 
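Both addresses are recovered by one pipeline over `ip -o -4 addr show`, traced as three separate @117 entries. As a single helper:

    # Column 4 of `ip -o -4 addr show <if>` is "ADDR/PREFIX"; cut drops the prefix.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
    get_ip_address mlx_0_1   # -> 192.168.100.9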
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:50.308 192.168.100.9' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:50.308 192.168.100.9' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:50.308 192.168.100.9' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:11:50.308 16:24:44 
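RDMA_IP_LIST ends up as a two-line string, and the @485/@486 entries peel one address off each end with head and tail. The same selection, standalone:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)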
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3747173 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3747173 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3747173 ']' 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.308 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.308 [2024-12-06 16:24:44.210605] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:11:50.308 [2024-12-06 16:24:44.210653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.308 [2024-12-06 16:24:44.270397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.308 [2024-12-06 16:24:44.309576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.308 [2024-12-06 16:24:44.309613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.308 [2024-12-06 16:24:44.309620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.308 [2024-12-06 16:24:44.309625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.308 [2024-12-06 16:24:44.309630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.308 [2024-12-06 16:24:44.310814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.309 [2024-12-06 16:24:44.310909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.309 [2024-12-06 16:24:44.310968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.309 [2024-12-06 16:24:44.310970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 [2024-12-06 16:24:44.473935] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9380c0/0x93c5b0) succeed. 00:11:50.309 [2024-12-06 16:24:44.482098] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x939750/0x97dc50) succeed. 
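`waitforlisten 3747173` above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. The real helper in autotest_common.sh carries more error handling; a sketch keeping only the polling shape, assuming rpc.py's rpc_get_methods as the liveness probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # app died while starting
            if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0                               # socket up and answering
            fi
            sleep 0.5
        done
        return 1
    }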
00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 Malloc0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 Malloc1 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 [2024-12-06 16:24:44.681128] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:50.309 16:24:44 
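The rpc_cmd calls above provision the target end to end: one RDMA transport, two 64 MiB/512 B malloc bdevs, a subsystem carrying both namespaces, and data plus discovery listeners on 192.168.100.8:4420. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the test the same sequence would read approximately:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
        -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery \
        -t rdma -a 192.168.100.8 -s 4420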
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:50.309 00:11:50.309 Discovery Log Number of Records 2, Generation counter 2 00:11:50.309 =====Discovery Log Entry 0====== 00:11:50.309 trtype: rdma 00:11:50.309 adrfam: ipv4 00:11:50.309 subtype: current discovery subsystem 00:11:50.309 treq: not required 00:11:50.309 portid: 0 00:11:50.309 trsvcid: 4420 00:11:50.309 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:50.309 traddr: 192.168.100.8 00:11:50.309 eflags: explicit discovery connections, duplicate discovery information 00:11:50.309 rdma_prtype: not specified 00:11:50.309 rdma_qptype: connected 00:11:50.309 rdma_cms: rdma-cm 00:11:50.309 rdma_pkey: 0x0000 00:11:50.309 =====Discovery Log Entry 1====== 00:11:50.309 trtype: rdma 00:11:50.309 adrfam: ipv4 00:11:50.309 subtype: nvme subsystem 00:11:50.309 treq: not required 00:11:50.309 portid: 0 00:11:50.309 trsvcid: 4420 00:11:50.309 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:50.309 traddr: 192.168.100.8 00:11:50.309 eflags: none 00:11:50.309 rdma_prtype: not specified 00:11:50.309 rdma_qptype: connected 00:11:50.309 rdma_cms: rdma-cm 00:11:50.309 rdma_pkey: 0x0000 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:50.309 16:24:44 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
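The discovery log above carries two records, the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, and the test then connects to the data record explicitly. nvme-cli can also walk the same log and connect to every NVM subsystem entry in one step; a sketch with this run's values:

    nvme connect-all -t rdma -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562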
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:51.240 16:24:45 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:53.129 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:11:53.130 /dev/nvme0n2 ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
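waitforserial, reassembled from the @1202-@1212 entries above: poll lsblk every two seconds, up to 16 tries, until as many namespaces carry the subsystem serial as expected; the count is 2 here because cnode1 exposes both Malloc0 and Malloc1:

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME 2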
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:53.130 16:24:47 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.497 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.498 
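get_nvme_devs, as traced at @549-@554: read `nvme list` line by line and keep the first field whenever it names a device node. Both namespaces of the connected controller show up before the disconnect:

    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            if [[ $dev == /dev/nvme* ]]; then
                echo "$dev"
            fi
        done < <(nvme list)
    }
    devs=($(get_nvme_devs))   # -> /dev/nvme0n1 /dev/nvme0n2 in this run
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1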
16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:54.498 rmmod nvme_rdma 00:11:54.498 rmmod nvme_fabrics 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3747173 ']' 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3747173 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3747173 ']' 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3747173 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.498 16:24:48 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3747173 00:11:54.498 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.498 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.498 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3747173' 00:11:54.498 killing process with pid 3747173 00:11:54.498 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3747173 00:11:54.498 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3747173 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:54.755 00:11:54.755 real 0m11.044s 00:11:54.755 user 0m21.382s 00:11:54.755 sys 0m4.844s 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 ************************************ 00:11:54.755 END TEST nvmf_nvme_cli 00:11:54.755 ************************************ 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.755 ************************************ 00:11:54.755 START TEST nvmf_auth_target 00:11:54.755 ************************************ 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
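The shutdown traced above, gathered in one place: retry unloading the fabrics modules (the preceding `set +e` tolerates a still-busy module), then signal the target by its recorded pid. A sketch only; the real nvmfcleanup/killprocess pair does more bookkeeping:

    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    if kill -0 "$nvmfpid" 2> /dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi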
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:11:54.755 * Looking for test storage... 00:11:54.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:54.755 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.013 --rc genhtml_branch_coverage=1 00:11:55.013 --rc genhtml_function_coverage=1 00:11:55.013 --rc genhtml_legend=1 00:11:55.013 --rc geninfo_all_blocks=1 00:11:55.013 --rc geninfo_unexecuted_blocks=1 00:11:55.013 00:11:55.013 ' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.013 --rc genhtml_branch_coverage=1 00:11:55.013 --rc genhtml_function_coverage=1 00:11:55.013 --rc genhtml_legend=1 00:11:55.013 --rc geninfo_all_blocks=1 00:11:55.013 --rc geninfo_unexecuted_blocks=1 00:11:55.013 00:11:55.013 ' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.013 --rc genhtml_branch_coverage=1 00:11:55.013 --rc genhtml_function_coverage=1 00:11:55.013 --rc genhtml_legend=1 00:11:55.013 --rc geninfo_all_blocks=1 00:11:55.013 --rc geninfo_unexecuted_blocks=1 00:11:55.013 00:11:55.013 ' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.013 --rc genhtml_branch_coverage=1 00:11:55.013 --rc genhtml_function_coverage=1 00:11:55.013 --rc genhtml_legend=1 00:11:55.013 --rc geninfo_all_blocks=1 00:11:55.013 --rc geninfo_unexecuted_blocks=1 00:11:55.013 00:11:55.013 ' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.013 16:24:49 
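The gate traced above only switches in the branch/function coverage flags when lcov is older than 2; `lcov --version | awk '{print $NF}'` yields the bare version string being compared. Condensed, with sort -V standing in for the field-by-field cmp_versions loop the script actually runs:

    lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi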
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.013 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.014 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
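paths/export.sh prepends its toolchain directories unconditionally, so every re-source (once per sourced common.sh) duplicates the go/golangci/protoc triple; that is why the PATH above carries the same segments many times over. A guard of this shape would keep the prepend idempotent (a sketch, not what export.sh currently does):

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin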
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.014 16:24:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.574 16:24:55 
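The digest and dhgroup arrays above define the auth test matrix; the loops that consume them sit later in auth.sh, outside this excerpt, but the sweep presumably has the shape of a plain nested loop over every pair:

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            echo "auth round: digest=$digest dhgroup=$dhgroup"
        done
    done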
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:01.574 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:01.574 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.575 16:24:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:01.575 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:01.575 Found net devices under 0000:18:00.0: mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:01.575 Found net devices under 0000:18:00.1: mlx_0_1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.575 16:24:55 
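Device discovery in the trace above works purely from sysfs: each candidate PCI function is classified by vendor/device ID (Intel 0x8086 for the e810/x722 lists, Mellanox 0x15b3 for the mlx list), and matching functions are then mapped to kernel net devices by globbing /sys/bus/pci/devices/<bdf>/net/. A condensed sketch of that resolution using the two ConnectX functions found in this run; the loop is a simplification of gather_supported_nvmf_pci_devs, not its full logic:

    for pci in 0000:18:00.0 0000:18:00.1; do
        vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")     # 0x15b3 (Mellanox) here
        device=$(cat "/sys/bus/pci/devices/$pci/device")     # 0x1015 here
        for net in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${net##*/}"  # mlx_0_0 and mlx_0_1 in this run
        done
    done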
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:01.575 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.575 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:01.575 altname enp24s0f0np0 00:12:01.575 altname ens785f0np0 00:12:01.575 inet 192.168.100.8/24 scope global mlx_0_0 00:12:01.575 valid_lft forever preferred_lft forever 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:01.575 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.575 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:01.575 altname enp24s0f1np1 00:12:01.575 altname ens785f1np1 00:12:01.575 inet 192.168.100.9/24 scope global mlx_0_1 00:12:01.575 valid_lft forever preferred_lft forever 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:01.575 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:01.576 16:24:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:01.576 192.168.100.9' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:01.576 192.168.100.9' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:01.576 192.168.100.9' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3751327 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3751327 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3751327 ']' 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
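nvmfappstart above boots the target (build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth), records its pid in nvmfpid, and blocks in waitforlisten until the RPC socket answers; the bookkeeping just before it is plain text slicing of the newline-separated RDMA_IP_LIST. A rough sketch of both steps, with the polling loop standing in for waitforlisten (rpc_get_methods is an ordinary SPDK RPC, used here only as a liveness probe):

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # wait for the app to listen on the UNIX domain socket
    done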
00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3751511 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dc6e0488fa058a218dcea4f9be293ab545e3a759d0a33dfe 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XjB 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dc6e0488fa058a218dcea4f9be293ab545e3a759d0a33dfe 0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dc6e0488fa058a218dcea4f9be293ab545e3a759d0a33dfe 0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dc6e0488fa058a218dcea4f9be293ab545e3a759d0a33dfe 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XjB 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XjB 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.XjB 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9eace1a624be6e2f47fe45f6ffdacc00babb8e82b0f9e1a82ba47e629879750 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.b4Y 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f9eace1a624be6e2f47fe45f6ffdacc00babb8e82b0f9e1a82ba47e629879750 3 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9eace1a624be6e2f47fe45f6ffdacc00babb8e82b0f9e1a82ba47e629879750 3 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9eace1a624be6e2f47fe45f6ffdacc00babb8e82b0f9e1a82ba47e629879750 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.b4Y 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.b4Y 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.b4Y 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.576 16:24:55 
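gen_dhchap_key, traced above, draws len/2 random bytes as a hex string (xxd -p -c0 -l 24 for the 48-character null-digest key) and wraps them into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest>:<base64 payload>: before writing it to a mode-0600 temp file. The inline python doing the wrapping is not expanded in the trace, so the encoding below is a reconstruction under the nvme-cli convention (payload = base64 of the ASCII secret followed by its CRC-32, little-endian); treat the CRC detail as an assumption:

    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as in the "null 48" case above
    python3 - "$key" <<'PY'
    import base64, sys, zlib
    secret = sys.argv[1].encode()
    crc = zlib.crc32(secret).to_bytes(4, 'little')   # assumed CRC-32 little-endian trailer
    print('DHHC-1:00:' + base64.b64encode(secret + crc).decode() + ':')  # 00 = null digest
    PY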
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df36d1c26ff2683b45f9af8332c6b246 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:01.576 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.vwq 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df36d1c26ff2683b45f9af8332c6b246 1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df36d1c26ff2683b45f9af8332c6b246 1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df36d1c26ff2683b45f9af8332c6b246 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.vwq 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.vwq 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vwq 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c22c540102db66cb93572c21b457f0fe2e821b479ec6722a 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bwt 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c22c540102db66cb93572c21b457f0fe2e821b479ec6722a 2 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c22c540102db66cb93572c21b457f0fe2e821b479ec6722a 2 00:12:01.577 16:24:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c22c540102db66cb93572c21b457f0fe2e821b479ec6722a 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bwt 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bwt 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Bwt 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7cfc30c7053bd6c5ea6234d649e24b64af9af841375f746b 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wOk 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7cfc30c7053bd6c5ea6234d649e24b64af9af841375f746b 2 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7cfc30c7053bd6c5ea6234d649e24b64af9af841375f746b 2 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7cfc30c7053bd6c5ea6234d649e24b64af9af841375f746b 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wOk 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wOk 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.wOk 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=889378ce088442381daa84acdf612860 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.70o 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 889378ce088442381daa84acdf612860 1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 889378ce088442381daa84acdf612860 1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=889378ce088442381daa84acdf612860 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.70o 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.70o 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.70o 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e7b2878407634c65bf84c2f048626fe4e779c1ede9960b5c3101fc006c6fce33 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:01.577 16:24:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9wL 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e7b2878407634c65bf84c2f048626fe4e779c1ede9960b5c3101fc006c6fce33 3 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e7b2878407634c65bf84c2f048626fe4e779c1ede9960b5c3101fc006c6fce33 3 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e7b2878407634c65bf84c2f048626fe4e779c1ede9960b5c3101fc006c6fce33 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9wL 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9wL 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.9wL 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3751327 00:12:01.577 16:24:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3751327 ']' 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.577 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3751511 /var/tmp/host.sock 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3751511 ']' 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
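From here on two SPDK applications are in play: the nvmf target started earlier (RPC socket /var/tmp/spdk.sock, -L nvmf_auth) and a host-side spdk_tgt (pid recorded in hostpid, RPC socket /var/tmp/host.sock, -L nvme_auth). rpc_cmd talks to the default target socket, while hostrpc, per the target/auth.sh@31 lines in the trace, is the same rpc.py pointed at the host socket. A minimal sketch of that wrapper; rootdir stands for the workspace's spdk checkout:

    hostrpc() {
        # route an RPC to the host-side application instead of the target
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }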
00:12:01.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.578 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XjB 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XjB 00:12:01.837 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XjB 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.b4Y ]] 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b4Y 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b4Y 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b4Y 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vwq 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.094 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.352 16:24:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vwq 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vwq 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Bwt ]] 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwt 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.352 16:24:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.352 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.352 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwt 00:12:02.352 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwt 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wOk 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wOk 00:12:02.610 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wOk 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.70o ]] 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o 00:12:02.868 16:24:57 
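Each generated secret is registered under a stable keyring name on both applications, so later RPCs can refer to keyN/ckeyN rather than file paths: keyN is the host key used to authenticate to the subsystem, and ckeyN, where present, is the controller key that makes the handshake bidirectional. Condensed from the key2 round above (file names are this run's temp files):

    rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.wOk
    rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o
    rpc.py -s /var/tmp/host.sock keyring_file_add_key key2  /tmp/spdk.key-sha384.wOk
    rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o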
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9wL 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.9wL 00:12:02.868 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.9wL 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:03.126 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:03.384 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:03.384 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.385 16:24:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.642 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.642 { 00:12:03.642 "cntlid": 1, 00:12:03.642 "qid": 0, 00:12:03.642 "state": "enabled", 00:12:03.642 "thread": "nvmf_tgt_poll_group_000", 00:12:03.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:03.642 "listen_address": { 00:12:03.642 "trtype": "RDMA", 00:12:03.642 "adrfam": "IPv4", 00:12:03.642 "traddr": "192.168.100.8", 00:12:03.642 "trsvcid": "4420" 00:12:03.642 }, 00:12:03.642 "peer_address": { 00:12:03.642 "trtype": "RDMA", 00:12:03.642 "adrfam": "IPv4", 00:12:03.642 "traddr": "192.168.100.8", 00:12:03.642 "trsvcid": "57852" 00:12:03.642 }, 00:12:03.642 "auth": { 00:12:03.642 "state": "completed", 00:12:03.642 "digest": "sha256", 00:12:03.642 "dhgroup": "null" 00:12:03.642 } 00:12:03.642 } 00:12:03.642 ]' 00:12:03.642 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.900 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:12:04.158 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:04.158 16:24:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:04.723 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.981 16:24:59 
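Besides the SPDK-host path, each round also exercises the kernel initiator: nvme_connect hands nvme-cli the literal DHHC-1 secrets (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key) rather than keyring names, and the controller is then removed with nvme disconnect. Condensed from the trace, with the secrets abbreviated here (the full values appear above); -i sets the I/O queue count and -l 0 the controller-loss timeout:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0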
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.981 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.239 00:12:05.239 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.239 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.239 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.496 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.496 { 00:12:05.496 "cntlid": 3, 00:12:05.496 "qid": 0, 00:12:05.496 "state": "enabled", 00:12:05.496 "thread": "nvmf_tgt_poll_group_000", 00:12:05.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:05.496 "listen_address": { 00:12:05.497 "trtype": "RDMA", 00:12:05.497 "adrfam": "IPv4", 00:12:05.497 "traddr": "192.168.100.8", 00:12:05.497 "trsvcid": "4420" 00:12:05.497 }, 00:12:05.497 "peer_address": { 00:12:05.497 "trtype": "RDMA", 00:12:05.497 "adrfam": "IPv4", 00:12:05.497 "traddr": "192.168.100.8", 00:12:05.497 "trsvcid": "46580" 00:12:05.497 }, 00:12:05.497 "auth": { 00:12:05.497 "state": "completed", 00:12:05.497 "digest": "sha256", 00:12:05.497 "dhgroup": "null" 00:12:05.497 } 00:12:05.497 } 00:12:05.497 ]' 00:12:05.497 16:24:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.497 16:25:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.497 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.754 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:05.754 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:06.320 16:25:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.320 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:06.320 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.320 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.321 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.321 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.321 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:06.321 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.578 16:25:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.578 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.835 00:12:06.835 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.835 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.835 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.094 { 00:12:07.094 "cntlid": 5, 00:12:07.094 "qid": 0, 00:12:07.094 "state": "enabled", 00:12:07.094 "thread": "nvmf_tgt_poll_group_000", 00:12:07.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:07.094 "listen_address": { 00:12:07.094 "trtype": "RDMA", 00:12:07.094 "adrfam": "IPv4", 00:12:07.094 "traddr": "192.168.100.8", 00:12:07.094 "trsvcid": "4420" 00:12:07.094 }, 00:12:07.094 "peer_address": { 00:12:07.094 "trtype": "RDMA", 00:12:07.094 "adrfam": "IPv4", 00:12:07.094 "traddr": "192.168.100.8", 00:12:07.094 "trsvcid": "40004" 00:12:07.094 }, 00:12:07.094 "auth": { 00:12:07.094 "state": "completed", 00:12:07.094 "digest": "sha256", 00:12:07.094 "dhgroup": "null" 00:12:07.094 } 00:12:07.094 } 00:12:07.094 ]' 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:07.094 16:25:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.094 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.352 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:07.352 16:25:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:07.919 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:08.177 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.178 16:25:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.436 00:12:08.436 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.436 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.436 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.695 { 00:12:08.695 "cntlid": 7, 00:12:08.695 "qid": 0, 00:12:08.695 "state": "enabled", 00:12:08.695 "thread": "nvmf_tgt_poll_group_000", 00:12:08.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:08.695 "listen_address": { 00:12:08.695 "trtype": "RDMA", 00:12:08.695 "adrfam": "IPv4", 00:12:08.695 "traddr": "192.168.100.8", 00:12:08.695 "trsvcid": "4420" 00:12:08.695 }, 00:12:08.695 "peer_address": { 00:12:08.695 "trtype": "RDMA", 00:12:08.695 "adrfam": "IPv4", 00:12:08.695 "traddr": "192.168.100.8", 00:12:08.695 "trsvcid": "37168" 00:12:08.695 }, 00:12:08.695 "auth": { 00:12:08.695 "state": "completed", 00:12:08.695 "digest": "sha256", 00:12:08.695 "dhgroup": "null" 00:12:08.695 } 00:12:08.695 } 00:12:08.695 ]' 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
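
Each attach in this trace is verified the same way: the controller name is read back with bdev_nvme_get_controllers, then the target's nvmf_subsystem_get_qpairs output is probed with the jq expressions echoed above. A condensed sketch of those checks, assuming $rpc points at the rpc.py script used throughout this run and $qpairs holds the JSON array printed above (the target-side rpc_cmd socket is hidden by xtrace in this excerpt, so only the host-side -s /var/tmp/host.sock call is shown):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # host-side controller must have come up under the name we asked for (-b nvme0)
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # target-side view of the queue pair must show the negotiated auth parameters
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
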
00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.695 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.952 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:08.952 16:25:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:09.519 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.777 16:25:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.777 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.035 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.035 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.035 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.035 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.035 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.293 { 00:12:10.293 "cntlid": 9, 00:12:10.293 "qid": 0, 00:12:10.293 "state": "enabled", 00:12:10.293 "thread": "nvmf_tgt_poll_group_000", 00:12:10.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:10.293 "listen_address": { 00:12:10.293 "trtype": "RDMA", 00:12:10.293 "adrfam": "IPv4", 00:12:10.293 "traddr": "192.168.100.8", 00:12:10.293 "trsvcid": "4420" 00:12:10.293 }, 00:12:10.293 "peer_address": { 00:12:10.293 "trtype": "RDMA", 00:12:10.293 "adrfam": "IPv4", 00:12:10.293 "traddr": "192.168.100.8", 00:12:10.293 "trsvcid": "54251" 00:12:10.293 }, 00:12:10.293 "auth": { 00:12:10.293 "state": "completed", 00:12:10.293 "digest": "sha256", 00:12:10.293 "dhgroup": "ffdhe2048" 00:12:10.293 } 00:12:10.293 } 00:12:10.293 ]' 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
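
In parallel with the SPDK host, every round also authenticates the in-kernel initiator: nvme-cli connects with the same DHHC-1 secrets passed on the command line, then disconnects before the host entry is removed from the subsystem. Condensed from the connect/disconnect pairs in this trace (secrets abbreviated here; reading -i and -l as nvme-cli's --nr-io-queues and --ctrl-loss-tmo is an inference, not something the log states):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
  hostid=00bafac1-9c9c-e711-906e-0017a4403562
  # host and controller secrets are verbatim DHHC-1 strings in the real trace
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

For the key3 rounds the --dhchap-ctrl-secret flag is dropped, matching the empty ckeys[3] slot visible in the ckey=() expansions above.
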
00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.293 16:25:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:10.550 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:11.113 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:11.370 16:25:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:11.627 16:25:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.627 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.884 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.884 { 00:12:11.884 "cntlid": 11, 00:12:11.884 "qid": 0, 00:12:11.884 "state": "enabled", 00:12:11.884 "thread": "nvmf_tgt_poll_group_000", 00:12:11.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:11.884 "listen_address": { 00:12:11.884 "trtype": "RDMA", 00:12:11.884 "adrfam": "IPv4", 00:12:11.884 "traddr": "192.168.100.8", 00:12:11.884 "trsvcid": "4420" 00:12:11.884 }, 00:12:11.884 "peer_address": { 00:12:11.884 "trtype": "RDMA", 00:12:11.884 "adrfam": "IPv4", 00:12:11.884 "traddr": 
"192.168.100.8", 00:12:11.884 "trsvcid": "47750" 00:12:11.884 }, 00:12:11.884 "auth": { 00:12:11.884 "state": "completed", 00:12:11.884 "digest": "sha256", 00:12:11.884 "dhgroup": "ffdhe2048" 00:12:11.884 } 00:12:11.884 } 00:12:11.884 ]' 00:12:11.884 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.141 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.399 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:12.399 16:25:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:12.963 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.221 16:25:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.479 00:12:13.479 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.479 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.479 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.737 { 00:12:13.737 "cntlid": 13, 00:12:13.737 "qid": 0, 00:12:13.737 "state": "enabled", 00:12:13.737 "thread": "nvmf_tgt_poll_group_000", 00:12:13.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:13.737 "listen_address": { 00:12:13.737 
"trtype": "RDMA", 00:12:13.737 "adrfam": "IPv4", 00:12:13.737 "traddr": "192.168.100.8", 00:12:13.737 "trsvcid": "4420" 00:12:13.737 }, 00:12:13.737 "peer_address": { 00:12:13.737 "trtype": "RDMA", 00:12:13.737 "adrfam": "IPv4", 00:12:13.737 "traddr": "192.168.100.8", 00:12:13.737 "trsvcid": "43345" 00:12:13.737 }, 00:12:13.737 "auth": { 00:12:13.737 "state": "completed", 00:12:13.737 "digest": "sha256", 00:12:13.737 "dhgroup": "ffdhe2048" 00:12:13.737 } 00:12:13.737 } 00:12:13.737 ]' 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.737 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.995 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:13.995 16:25:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:14.560 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.817 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.074 00:12:15.074 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.074 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.074 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.332 { 00:12:15.332 "cntlid": 15, 00:12:15.332 "qid": 0, 00:12:15.332 "state": "enabled", 
00:12:15.332 "thread": "nvmf_tgt_poll_group_000", 00:12:15.332 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:15.332 "listen_address": { 00:12:15.332 "trtype": "RDMA", 00:12:15.332 "adrfam": "IPv4", 00:12:15.332 "traddr": "192.168.100.8", 00:12:15.332 "trsvcid": "4420" 00:12:15.332 }, 00:12:15.332 "peer_address": { 00:12:15.332 "trtype": "RDMA", 00:12:15.332 "adrfam": "IPv4", 00:12:15.332 "traddr": "192.168.100.8", 00:12:15.332 "trsvcid": "48517" 00:12:15.332 }, 00:12:15.332 "auth": { 00:12:15.332 "state": "completed", 00:12:15.332 "digest": "sha256", 00:12:15.332 "dhgroup": "ffdhe2048" 00:12:15.332 } 00:12:15.332 } 00:12:15.332 ]' 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.332 16:25:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.332 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.332 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.332 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.590 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:15.590 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:16.156 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:16.414 16:25:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.414 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.672 00:12:16.672 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.672 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.672 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.928 16:25:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.928 { 00:12:16.928 "cntlid": 17, 00:12:16.928 "qid": 0, 00:12:16.928 "state": "enabled", 00:12:16.928 "thread": "nvmf_tgt_poll_group_000", 00:12:16.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:16.928 "listen_address": { 00:12:16.928 "trtype": "RDMA", 00:12:16.928 "adrfam": "IPv4", 00:12:16.928 "traddr": "192.168.100.8", 00:12:16.928 "trsvcid": "4420" 00:12:16.928 }, 00:12:16.928 "peer_address": { 00:12:16.928 "trtype": "RDMA", 00:12:16.928 "adrfam": "IPv4", 00:12:16.928 "traddr": "192.168.100.8", 00:12:16.928 "trsvcid": "45634" 00:12:16.928 }, 00:12:16.928 "auth": { 00:12:16.928 "state": "completed", 00:12:16.928 "digest": "sha256", 00:12:16.928 "dhgroup": "ffdhe3072" 00:12:16.928 } 00:12:16.928 } 00:12:16.928 ]' 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:16.928 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.185 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.185 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.185 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.185 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:17.185 16:25:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:17.747 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:18.003 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.261 16:25:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.519 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.519 16:25:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.519 { 00:12:18.519 "cntlid": 19, 00:12:18.519 "qid": 0, 00:12:18.519 "state": "enabled", 00:12:18.519 "thread": "nvmf_tgt_poll_group_000", 00:12:18.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:18.519 "listen_address": { 00:12:18.519 "trtype": "RDMA", 00:12:18.519 "adrfam": "IPv4", 00:12:18.519 "traddr": "192.168.100.8", 00:12:18.519 "trsvcid": "4420" 00:12:18.519 }, 00:12:18.519 "peer_address": { 00:12:18.519 "trtype": "RDMA", 00:12:18.519 "adrfam": "IPv4", 00:12:18.519 "traddr": "192.168.100.8", 00:12:18.519 "trsvcid": "50459" 00:12:18.519 }, 00:12:18.519 "auth": { 00:12:18.519 "state": "completed", 00:12:18.519 "digest": "sha256", 00:12:18.519 "dhgroup": "ffdhe3072" 00:12:18.519 } 00:12:18.519 } 00:12:18.519 ]' 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.519 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.775 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.775 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.775 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.775 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.775 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.032 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:19.032 16:25:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:19.597 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:19.854 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.855 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.111 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.111 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.368 { 00:12:20.368 "cntlid": 21, 00:12:20.368 "qid": 0, 00:12:20.368 "state": "enabled", 00:12:20.368 "thread": "nvmf_tgt_poll_group_000", 00:12:20.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:20.368 "listen_address": { 00:12:20.368 "trtype": "RDMA", 00:12:20.368 "adrfam": "IPv4", 00:12:20.368 "traddr": "192.168.100.8", 00:12:20.368 "trsvcid": "4420" 00:12:20.368 }, 00:12:20.368 "peer_address": { 00:12:20.368 "trtype": "RDMA", 00:12:20.368 "adrfam": "IPv4", 00:12:20.368 "traddr": "192.168.100.8", 00:12:20.368 "trsvcid": "45154" 00:12:20.368 }, 00:12:20.368 "auth": { 00:12:20.368 "state": "completed", 00:12:20.368 "digest": "sha256", 00:12:20.368 "dhgroup": "ffdhe3072" 00:12:20.368 } 00:12:20.368 } 00:12:20.368 ]' 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.368 16:25:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.625 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:20.625 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:21.189 16:25:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.446 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.704 00:12:21.704 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.704 16:25:16 
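Note that the add_host call for key3 above passes no --dhchap-ctrlr-key: the expansion traced at target/auth.sh@68, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), yields an empty array when ckeys[keyid] is unset or empty, so bidirectional authentication is only requested for key ids that have a controller key. A minimal bash illustration of that expansion (the array contents here are hypothetical):

  ckeys=(c0 c1 c2 "")                      # assume entry 3 is empty, as it is for key3 above
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${ckey[*]:-<unidirectional>}"
  done
  # key0..key2 print "--dhchap-ctrlr-key ckeyN"; key3 prints "<unidirectional>"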
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.704 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.961 { 00:12:21.961 "cntlid": 23, 00:12:21.961 "qid": 0, 00:12:21.961 "state": "enabled", 00:12:21.961 "thread": "nvmf_tgt_poll_group_000", 00:12:21.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:21.961 "listen_address": { 00:12:21.961 "trtype": "RDMA", 00:12:21.961 "adrfam": "IPv4", 00:12:21.961 "traddr": "192.168.100.8", 00:12:21.961 "trsvcid": "4420" 00:12:21.961 }, 00:12:21.961 "peer_address": { 00:12:21.961 "trtype": "RDMA", 00:12:21.961 "adrfam": "IPv4", 00:12:21.961 "traddr": "192.168.100.8", 00:12:21.961 "trsvcid": "44667" 00:12:21.961 }, 00:12:21.961 "auth": { 00:12:21.961 "state": "completed", 00:12:21.961 "digest": "sha256", 00:12:21.961 "dhgroup": "ffdhe3072" 00:12:21.961 } 00:12:21.961 } 00:12:21.961 ]' 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.961 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.219 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:22.219 16:25:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:22.784 16:25:17 
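Each round also exercises the kernel initiator: after the SPDK bdev path is torn down, the same key material is fed to nvme-cli. The key3 connect echoed above, taken verbatim from the trace and only reflowed here, is:

  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
      --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

No --dhchap-ctrl-secret is passed for key3, matching the unidirectional setup on the target side.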
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:22.784 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.042 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.301 00:12:23.301 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.301 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.301 16:25:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.559 { 00:12:23.559 "cntlid": 25, 00:12:23.559 "qid": 0, 00:12:23.559 "state": "enabled", 00:12:23.559 "thread": "nvmf_tgt_poll_group_000", 00:12:23.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:23.559 "listen_address": { 00:12:23.559 "trtype": "RDMA", 00:12:23.559 "adrfam": "IPv4", 00:12:23.559 "traddr": "192.168.100.8", 00:12:23.559 "trsvcid": "4420" 00:12:23.559 }, 00:12:23.559 "peer_address": { 00:12:23.559 "trtype": "RDMA", 00:12:23.559 "adrfam": "IPv4", 00:12:23.559 "traddr": "192.168.100.8", 00:12:23.559 "trsvcid": "35861" 00:12:23.559 }, 00:12:23.559 "auth": { 00:12:23.559 "state": "completed", 00:12:23.559 "digest": "sha256", 00:12:23.559 "dhgroup": "ffdhe4096" 00:12:23.559 } 00:12:23.559 } 00:12:23.559 ]' 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.559 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.818 16:25:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:23.818 16:25:18 
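The three jq probes that follow every attach are the actual pass/fail criteria of connect_authenticate: the qpair reported by the target must have completed authentication with exactly the digest and dhgroup configured for the round. The checks at target/auth.sh@75-77 distill to the following sketch (rpc_cmd again stands for the target-side RPC wrapper):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Any mismatch fails the [[ ]] test and with it the round.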
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:24.401 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.659 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.660 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.917 00:12:24.917 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.917 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.917 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.174 { 00:12:25.174 "cntlid": 27, 00:12:25.174 "qid": 0, 00:12:25.174 "state": "enabled", 00:12:25.174 "thread": "nvmf_tgt_poll_group_000", 00:12:25.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:25.174 "listen_address": { 00:12:25.174 "trtype": "RDMA", 00:12:25.174 "adrfam": "IPv4", 00:12:25.174 "traddr": "192.168.100.8", 00:12:25.174 "trsvcid": "4420" 00:12:25.174 }, 00:12:25.174 "peer_address": { 00:12:25.174 "trtype": "RDMA", 00:12:25.174 "adrfam": "IPv4", 00:12:25.174 "traddr": "192.168.100.8", 00:12:25.174 "trsvcid": "55765" 00:12:25.174 }, 00:12:25.174 "auth": { 00:12:25.174 "state": "completed", 00:12:25.174 "digest": "sha256", 00:12:25.174 "dhgroup": "ffdhe4096" 00:12:25.174 } 00:12:25.174 } 00:12:25.174 ]' 00:12:25.174 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.175 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.175 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.175 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:25.175 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.433 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.433 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.433 16:25:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.433 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:25.433 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:25.998 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:26.255 16:25:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.513 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.771 00:12:26.771 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.771 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.771 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.771 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.029 { 00:12:27.029 "cntlid": 29, 00:12:27.029 "qid": 0, 00:12:27.029 "state": "enabled", 00:12:27.029 "thread": "nvmf_tgt_poll_group_000", 00:12:27.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:27.029 "listen_address": { 00:12:27.029 "trtype": "RDMA", 00:12:27.029 "adrfam": "IPv4", 00:12:27.029 "traddr": "192.168.100.8", 00:12:27.029 "trsvcid": "4420" 00:12:27.029 }, 00:12:27.029 "peer_address": { 00:12:27.029 "trtype": "RDMA", 00:12:27.029 "adrfam": "IPv4", 00:12:27.029 "traddr": "192.168.100.8", 00:12:27.029 "trsvcid": "60464" 00:12:27.029 }, 00:12:27.029 "auth": { 00:12:27.029 "state": "completed", 00:12:27.029 "digest": "sha256", 00:12:27.029 "dhgroup": "ffdhe4096" 00:12:27.029 } 00:12:27.029 } 00:12:27.029 ]' 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.029 16:25:21 
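A note on the key strings themselves: every secret in this log follows the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, per the NVMe spec's secret format (stated here as background, not something this log asserts), <t> is 00 for an unhashed secret and 01/02/03 for a SHA-256/384/512-transformed one, and the base64 payload carries the secret plus a trailing CRC32. That is why the 00-, 01-, 02-, and 03-class secrets above have different lengths. For example, splitting one of the secrets seen above into its fields:

  secret='DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9:'
  IFS=: read -r fmt transform b64 _ <<< "$secret"
  echo "$fmt / $transform / $(base64 -d <<< "$b64" | wc -c) bytes (secret + CRC32)"

Decoding this 01-class secret yields 36 bytes, consistent with a 32-byte key plus the 4-byte CRC.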
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.029 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.287 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:27.287 16:25:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:27.852 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.110 16:25:22 
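The loop markers in the trace (target/auth.sh@119-123) reveal the overall shape of this phase: an outer loop over DH groups and an inner loop over all four key ids, with the host restricted to one combination per round. Reconstructed as a sketch (the dhgroups array contents are only partially visible in this slice, which shows ffdhe3072, ffdhe4096, and ffdhe6144; any surrounding digest loop sits outside this excerpt, so sha256 is written literally):

  for dhgroup in "${dhgroups[@]}"; do                                  # @119
      for keyid in "${!keys[@]}"; do                                   # @120
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"     # @121
          connect_authenticate sha256 "$dhgroup" "$keyid"              # @123
      done
  done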
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.110 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.369 00:12:28.369 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.369 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.369 16:25:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.627 { 00:12:28.627 "cntlid": 31, 00:12:28.627 "qid": 0, 00:12:28.627 "state": "enabled", 00:12:28.627 "thread": "nvmf_tgt_poll_group_000", 00:12:28.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:28.627 "listen_address": { 00:12:28.627 "trtype": "RDMA", 00:12:28.627 "adrfam": "IPv4", 00:12:28.627 "traddr": "192.168.100.8", 00:12:28.627 "trsvcid": "4420" 00:12:28.627 }, 00:12:28.627 "peer_address": { 00:12:28.627 "trtype": "RDMA", 00:12:28.627 "adrfam": "IPv4", 00:12:28.627 "traddr": "192.168.100.8", 00:12:28.627 "trsvcid": "48300" 00:12:28.627 }, 00:12:28.627 "auth": { 00:12:28.627 "state": "completed", 00:12:28.627 "digest": "sha256", 00:12:28.627 "dhgroup": "ffdhe4096" 00:12:28.627 } 00:12:28.627 } 00:12:28.627 ]' 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.627 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.885 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:28.885 16:25:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:29.451 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:12:29.708 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.709 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.709 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.709 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.709 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.709 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.274 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.274 { 00:12:30.274 "cntlid": 33, 00:12:30.274 "qid": 0, 00:12:30.274 "state": "enabled", 00:12:30.274 "thread": "nvmf_tgt_poll_group_000", 00:12:30.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:30.274 "listen_address": { 00:12:30.274 "trtype": "RDMA", 00:12:30.274 "adrfam": "IPv4", 00:12:30.274 "traddr": "192.168.100.8", 00:12:30.274 "trsvcid": "4420" 00:12:30.274 }, 00:12:30.274 "peer_address": { 00:12:30.274 "trtype": "RDMA", 00:12:30.274 "adrfam": "IPv4", 00:12:30.274 "traddr": "192.168.100.8", 00:12:30.274 "trsvcid": "48741" 00:12:30.274 }, 00:12:30.274 "auth": { 00:12:30.274 "state": "completed", 00:12:30.274 "digest": "sha256", 00:12:30.274 "dhgroup": "ffdhe6144" 00:12:30.274 } 00:12:30.274 } 00:12:30.274 ]' 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.274 16:25:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:30.534 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:31.466 16:25:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:31.466 16:25:26 
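Throughout this log, every hostrpc line is immediately followed by its @31 expansion: the same rpc.py script, pointed at /var/tmp/host.sock rather than the target's default socket. In other words, the initiator-side bdev_nvme stack runs as a second SPDK application driven over its own RPC socket, while bare rpc_cmd calls go to the target. A plausible reconstruction of the wrapper (the real definition at target/auth.sh@31 may differ in detail):

  hostrpc() {
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0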
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.466 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.724 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.981 { 00:12:31.981 "cntlid": 35, 00:12:31.981 "qid": 0, 00:12:31.981 "state": "enabled", 00:12:31.981 "thread": "nvmf_tgt_poll_group_000", 00:12:31.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:31.981 "listen_address": { 00:12:31.981 "trtype": "RDMA", 00:12:31.981 "adrfam": "IPv4", 00:12:31.981 "traddr": "192.168.100.8", 00:12:31.981 "trsvcid": "4420" 00:12:31.981 }, 00:12:31.981 "peer_address": { 00:12:31.981 "trtype": "RDMA", 00:12:31.981 "adrfam": "IPv4", 00:12:31.981 "traddr": "192.168.100.8", 00:12:31.981 "trsvcid": "38992" 00:12:31.981 }, 00:12:31.981 "auth": { 00:12:31.981 "state": "completed", 00:12:31.981 "digest": "sha256", 00:12:31.981 "dhgroup": "ffdhe6144" 00:12:31.981 } 00:12:31.981 } 
00:12:31.981 ]' 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:31.981 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:32.239 16:25:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:32.804 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:33.061 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.319 16:25:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.577 00:12:33.577 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.577 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.577 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.834 { 00:12:33.834 "cntlid": 37, 00:12:33.834 "qid": 0, 00:12:33.834 "state": "enabled", 00:12:33.834 "thread": "nvmf_tgt_poll_group_000", 00:12:33.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:33.834 "listen_address": { 00:12:33.834 "trtype": "RDMA", 00:12:33.834 "adrfam": "IPv4", 00:12:33.834 "traddr": "192.168.100.8", 00:12:33.834 "trsvcid": "4420" 00:12:33.834 }, 00:12:33.834 "peer_address": { 00:12:33.834 "trtype": "RDMA", 00:12:33.834 "adrfam": 
"IPv4", 00:12:33.834 "traddr": "192.168.100.8", 00:12:33.834 "trsvcid": "39891" 00:12:33.834 }, 00:12:33.834 "auth": { 00:12:33.834 "state": "completed", 00:12:33.834 "digest": "sha256", 00:12:33.834 "dhgroup": "ffdhe6144" 00:12:33.834 } 00:12:33.834 } 00:12:33.834 ]' 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.834 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.092 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:34.092 16:25:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:34.657 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.915 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.172 00:12:35.429 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.429 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.429 16:25:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.429 { 00:12:35.429 "cntlid": 39, 00:12:35.429 "qid": 0, 00:12:35.429 "state": "enabled", 00:12:35.429 "thread": "nvmf_tgt_poll_group_000", 00:12:35.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:35.429 "listen_address": { 00:12:35.429 "trtype": "RDMA", 00:12:35.429 "adrfam": "IPv4", 00:12:35.429 
"traddr": "192.168.100.8", 00:12:35.429 "trsvcid": "4420" 00:12:35.429 }, 00:12:35.429 "peer_address": { 00:12:35.429 "trtype": "RDMA", 00:12:35.429 "adrfam": "IPv4", 00:12:35.429 "traddr": "192.168.100.8", 00:12:35.429 "trsvcid": "40429" 00:12:35.429 }, 00:12:35.429 "auth": { 00:12:35.429 "state": "completed", 00:12:35.429 "digest": "sha256", 00:12:35.429 "dhgroup": "ffdhe6144" 00:12:35.429 } 00:12:35.429 } 00:12:35.429 ]' 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.429 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:35.686 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:36.250 16:25:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:36.508 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.786 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.108 00:12:37.108 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.108 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.108 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.442 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.442 { 00:12:37.442 "cntlid": 41, 00:12:37.442 "qid": 0, 00:12:37.442 "state": "enabled", 
00:12:37.443 "thread": "nvmf_tgt_poll_group_000", 00:12:37.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:37.443 "listen_address": { 00:12:37.443 "trtype": "RDMA", 00:12:37.443 "adrfam": "IPv4", 00:12:37.443 "traddr": "192.168.100.8", 00:12:37.443 "trsvcid": "4420" 00:12:37.443 }, 00:12:37.443 "peer_address": { 00:12:37.443 "trtype": "RDMA", 00:12:37.443 "adrfam": "IPv4", 00:12:37.443 "traddr": "192.168.100.8", 00:12:37.443 "trsvcid": "58350" 00:12:37.443 }, 00:12:37.443 "auth": { 00:12:37.443 "state": "completed", 00:12:37.443 "digest": "sha256", 00:12:37.443 "dhgroup": "ffdhe8192" 00:12:37.443 } 00:12:37.443 } 00:12:37.443 ]' 00:12:37.443 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.443 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.443 16:25:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.443 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:37.443 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.443 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.443 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.443 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.746 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:37.746 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.311 16:25:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:38.311 16:25:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:38.568 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:38.568 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.568 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.569 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.133 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.133 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.391 { 00:12:39.391 "cntlid": 43, 00:12:39.391 "qid": 0, 00:12:39.391 "state": "enabled", 00:12:39.391 "thread": "nvmf_tgt_poll_group_000", 00:12:39.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:39.391 "listen_address": { 00:12:39.391 "trtype": "RDMA", 00:12:39.391 "adrfam": "IPv4", 00:12:39.391 "traddr": "192.168.100.8", 00:12:39.391 "trsvcid": "4420" 00:12:39.391 }, 00:12:39.391 "peer_address": { 00:12:39.391 "trtype": "RDMA", 00:12:39.391 "adrfam": "IPv4", 00:12:39.391 "traddr": "192.168.100.8", 00:12:39.391 "trsvcid": "57578" 00:12:39.391 }, 00:12:39.391 "auth": { 00:12:39.391 "state": "completed", 00:12:39.391 "digest": "sha256", 00:12:39.391 "dhgroup": "ffdhe8192" 00:12:39.391 } 00:12:39.391 } 00:12:39.391 ]' 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.391 16:25:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.650 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:39.650 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
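Each pass of the loop above repeats the same connect_authenticate sequence with a new digest/dhgroup/key combination. Below is a minimal sketch of one such cycle, assuming the rpc.py path, host socket, target address, and NQNs seen in this run; the target-side RPC socket (/var/tmp/spdk.sock, SPDK's default) is an assumption, since the log only shows rpc_cmd without a socket argument, and key1/ckey1 name keyring entries loaded earlier in the script (not shown here).

# Sketch of one connect_authenticate cycle (sha256 / ffdhe8192 / key1).
# host.sock is the initiator-side bdev_nvme app; the target app is assumed
# to listen on SPDK's default socket (assumption, not printed in this log).
host_rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
tgt_rpc()  { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the initiator to a single digest/dhgroup pair so the negotiation is deterministic.
host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Authorize the host on the subsystem with a host key and a controller key
# (supplying --dhchap-ctrlr-key makes the authentication bidirectional).
tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attaching a controller over RDMA now performs the DH-HMAC-CHAP handshake.
host_rpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify: the controller came up and the target reports the auth as completed.
[[ $(host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
[[ $(tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state') == completed ]]

# Tear down before the next digest/dhgroup/key combination.
host_rpc bdev_nvme_detach_controller nvme0
tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The kernel-initiator half of each cycle, visible in the log between the detach and the remove_host, does the same handshake through nvme-cli (nvme connect -t rdma ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:... followed by nvme disconnect), using the literal DHHC-1 secrets that correspond to the same key pair.
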
00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:40.216 16:25:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.482 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.047 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.047 { 00:12:41.047 "cntlid": 45, 00:12:41.047 "qid": 0, 00:12:41.047 "state": "enabled", 00:12:41.047 "thread": "nvmf_tgt_poll_group_000", 00:12:41.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:41.047 "listen_address": { 00:12:41.047 "trtype": "RDMA", 00:12:41.047 "adrfam": "IPv4", 00:12:41.047 "traddr": "192.168.100.8", 00:12:41.047 "trsvcid": "4420" 00:12:41.047 }, 00:12:41.047 "peer_address": { 00:12:41.047 "trtype": "RDMA", 00:12:41.047 "adrfam": "IPv4", 00:12:41.047 "traddr": "192.168.100.8", 00:12:41.047 "trsvcid": "43932" 00:12:41.047 }, 00:12:41.047 "auth": { 00:12:41.047 "state": "completed", 00:12:41.047 "digest": "sha256", 00:12:41.047 "dhgroup": "ffdhe8192" 00:12:41.047 } 00:12:41.047 } 00:12:41.047 ]' 00:12:41.047 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.305 16:25:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.562 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:41.562 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:42.128 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.386 16:25:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:42.951 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.951 
16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.951 { 00:12:42.951 "cntlid": 47, 00:12:42.951 "qid": 0, 00:12:42.951 "state": "enabled", 00:12:42.951 "thread": "nvmf_tgt_poll_group_000", 00:12:42.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:42.951 "listen_address": { 00:12:42.951 "trtype": "RDMA", 00:12:42.951 "adrfam": "IPv4", 00:12:42.951 "traddr": "192.168.100.8", 00:12:42.951 "trsvcid": "4420" 00:12:42.951 }, 00:12:42.951 "peer_address": { 00:12:42.951 "trtype": "RDMA", 00:12:42.951 "adrfam": "IPv4", 00:12:42.951 "traddr": "192.168.100.8", 00:12:42.951 "trsvcid": "34101" 00:12:42.951 }, 00:12:42.951 "auth": { 00:12:42.951 "state": "completed", 00:12:42.951 "digest": "sha256", 00:12:42.951 "dhgroup": "ffdhe8192" 00:12:42.951 } 00:12:42.951 } 00:12:42.951 ]' 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.951 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.212 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.212 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.212 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.212 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.212 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.469 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:43.469 16:25:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.033 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.291 16:25:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.549 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.549 { 00:12:44.549 "cntlid": 49, 00:12:44.549 "qid": 0, 00:12:44.549 "state": "enabled", 00:12:44.549 "thread": "nvmf_tgt_poll_group_000", 00:12:44.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:44.549 "listen_address": { 00:12:44.549 "trtype": "RDMA", 00:12:44.549 "adrfam": "IPv4", 00:12:44.549 "traddr": "192.168.100.8", 00:12:44.549 "trsvcid": "4420" 00:12:44.549 }, 00:12:44.549 "peer_address": { 00:12:44.549 "trtype": "RDMA", 00:12:44.549 "adrfam": "IPv4", 00:12:44.549 "traddr": "192.168.100.8", 00:12:44.549 "trsvcid": "37075" 00:12:44.549 }, 00:12:44.549 "auth": { 00:12:44.549 "state": "completed", 00:12:44.549 "digest": "sha384", 00:12:44.549 "dhgroup": "null" 00:12:44.549 } 00:12:44.549 } 00:12:44.549 ]' 00:12:44.549 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.807 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.064 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:45.064 16:25:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:45.650 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.906 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.163 00:12:46.163 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.163 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.163 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.163 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.421 { 00:12:46.421 "cntlid": 51, 00:12:46.421 "qid": 0, 00:12:46.421 "state": "enabled", 00:12:46.421 "thread": "nvmf_tgt_poll_group_000", 00:12:46.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:46.421 "listen_address": { 00:12:46.421 "trtype": "RDMA", 00:12:46.421 "adrfam": "IPv4", 00:12:46.421 "traddr": "192.168.100.8", 00:12:46.421 "trsvcid": "4420" 00:12:46.421 }, 00:12:46.421 "peer_address": { 00:12:46.421 "trtype": "RDMA", 00:12:46.421 "adrfam": "IPv4", 00:12:46.421 "traddr": "192.168.100.8", 00:12:46.421 "trsvcid": "51948" 00:12:46.421 }, 00:12:46.421 "auth": { 00:12:46.421 "state": "completed", 00:12:46.421 "digest": "sha384", 00:12:46.421 "dhgroup": "null" 00:12:46.421 } 00:12:46.421 } 00:12:46.421 ]' 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:46.421 16:25:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.421 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.421 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.421 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.679 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:46.679 16:25:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:47.243 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.243 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:47.243 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.243 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.244 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.244 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.244 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:47.244 16:25:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:12:47.501 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.759 00:12:47.759 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.759 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.759 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.017 { 00:12:48.017 "cntlid": 53, 00:12:48.017 "qid": 0, 00:12:48.017 "state": "enabled", 00:12:48.017 "thread": "nvmf_tgt_poll_group_000", 00:12:48.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:48.017 "listen_address": { 00:12:48.017 "trtype": "RDMA", 00:12:48.017 "adrfam": "IPv4", 00:12:48.017 "traddr": "192.168.100.8", 00:12:48.017 "trsvcid": "4420" 00:12:48.017 }, 00:12:48.017 "peer_address": { 00:12:48.017 "trtype": "RDMA", 00:12:48.017 "adrfam": "IPv4", 00:12:48.017 "traddr": "192.168.100.8", 00:12:48.017 "trsvcid": "44542" 00:12:48.017 }, 00:12:48.017 "auth": { 00:12:48.017 "state": "completed", 00:12:48.017 "digest": "sha384", 00:12:48.017 "dhgroup": "null" 00:12:48.017 } 00:12:48.017 } 00:12:48.017 ]' 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.017 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.275 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:48.275 16:25:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:48.840 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.098 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.356 00:12:49.356 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.356 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.356 16:25:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.614 { 00:12:49.614 "cntlid": 55, 00:12:49.614 "qid": 0, 00:12:49.614 "state": "enabled", 00:12:49.614 "thread": "nvmf_tgt_poll_group_000", 00:12:49.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:49.614 "listen_address": { 00:12:49.614 "trtype": "RDMA", 00:12:49.614 "adrfam": "IPv4", 00:12:49.614 "traddr": "192.168.100.8", 00:12:49.614 "trsvcid": "4420" 00:12:49.614 }, 00:12:49.614 "peer_address": { 00:12:49.614 "trtype": "RDMA", 00:12:49.614 "adrfam": "IPv4", 00:12:49.614 "traddr": "192.168.100.8", 00:12:49.614 "trsvcid": "39705" 00:12:49.614 }, 00:12:49.614 "auth": { 00:12:49.614 "state": "completed", 00:12:49.614 "digest": "sha384", 00:12:49.614 "dhgroup": "null" 00:12:49.614 } 00:12:49.614 } 00:12:49.614 ]' 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.614 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:12:49.871 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:49.871 16:25:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.435 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.692 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.949 00:12:50.949 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.949 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.949 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.207 { 00:12:51.207 "cntlid": 57, 00:12:51.207 "qid": 0, 00:12:51.207 "state": "enabled", 00:12:51.207 "thread": "nvmf_tgt_poll_group_000", 00:12:51.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:51.207 "listen_address": { 00:12:51.207 "trtype": "RDMA", 00:12:51.207 "adrfam": "IPv4", 00:12:51.207 "traddr": "192.168.100.8", 00:12:51.207 "trsvcid": "4420" 00:12:51.207 }, 00:12:51.207 "peer_address": { 00:12:51.207 "trtype": "RDMA", 00:12:51.207 "adrfam": "IPv4", 00:12:51.207 "traddr": "192.168.100.8", 00:12:51.207 "trsvcid": "47688" 00:12:51.207 }, 00:12:51.207 "auth": { 00:12:51.207 "state": "completed", 00:12:51.207 "digest": "sha384", 00:12:51.207 "dhgroup": "ffdhe2048" 00:12:51.207 } 00:12:51.207 } 00:12:51.207 ]' 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:12:51.207 16:25:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.464 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:51.464 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:52.028 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.285 
16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.285 16:25:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.542 00:12:52.542 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.542 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.542 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.798 { 00:12:52.798 "cntlid": 59, 00:12:52.798 "qid": 0, 00:12:52.798 "state": "enabled", 00:12:52.798 "thread": "nvmf_tgt_poll_group_000", 00:12:52.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:52.798 "listen_address": { 00:12:52.798 "trtype": "RDMA", 00:12:52.798 "adrfam": "IPv4", 00:12:52.798 "traddr": "192.168.100.8", 00:12:52.798 "trsvcid": "4420" 00:12:52.798 }, 00:12:52.798 "peer_address": { 00:12:52.798 "trtype": "RDMA", 00:12:52.798 "adrfam": "IPv4", 00:12:52.798 "traddr": "192.168.100.8", 00:12:52.798 "trsvcid": "32867" 00:12:52.798 }, 00:12:52.798 "auth": { 00:12:52.798 "state": "completed", 00:12:52.798 "digest": "sha384", 00:12:52.798 "dhgroup": "ffdhe2048" 00:12:52.798 } 00:12:52.798 } 00:12:52.798 ]' 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.798 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.055 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:53.055 16:25:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:53.619 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.877 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.135 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.135 16:25:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.392 { 00:12:54.392 "cntlid": 61, 00:12:54.392 "qid": 0, 00:12:54.392 "state": "enabled", 00:12:54.392 "thread": "nvmf_tgt_poll_group_000", 00:12:54.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:54.392 "listen_address": { 00:12:54.392 "trtype": "RDMA", 00:12:54.392 "adrfam": "IPv4", 00:12:54.392 "traddr": "192.168.100.8", 00:12:54.392 "trsvcid": "4420" 00:12:54.392 }, 00:12:54.392 "peer_address": { 00:12:54.392 "trtype": "RDMA", 00:12:54.392 "adrfam": "IPv4", 00:12:54.392 "traddr": "192.168.100.8", 00:12:54.392 "trsvcid": "41469" 00:12:54.392 }, 00:12:54.392 "auth": { 00:12:54.392 "state": "completed", 00:12:54.392 "digest": "sha384", 00:12:54.392 "dhgroup": "ffdhe2048" 00:12:54.392 } 00:12:54.392 } 00:12:54.392 ]' 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:54.392 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.650 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.650 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.650 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.650 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:54.650 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:12:55.581 16:25:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:55.581 16:25:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.581 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.840 00:12:55.840 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.840 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.840 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.099 { 00:12:56.099 "cntlid": 63, 00:12:56.099 "qid": 0, 00:12:56.099 "state": "enabled", 00:12:56.099 "thread": "nvmf_tgt_poll_group_000", 00:12:56.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:56.099 "listen_address": { 00:12:56.099 "trtype": "RDMA", 00:12:56.099 "adrfam": "IPv4", 00:12:56.099 "traddr": "192.168.100.8", 00:12:56.099 "trsvcid": "4420" 00:12:56.099 }, 00:12:56.099 "peer_address": { 00:12:56.099 "trtype": "RDMA", 00:12:56.099 "adrfam": "IPv4", 00:12:56.099 "traddr": "192.168.100.8", 00:12:56.099 "trsvcid": "53871" 00:12:56.099 }, 00:12:56.099 "auth": { 00:12:56.099 "state": "completed", 00:12:56.099 "digest": "sha384", 00:12:56.099 "dhgroup": "ffdhe2048" 00:12:56.099 } 00:12:56.099 } 00:12:56.099 ]' 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.099 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.356 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.356 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.356 16:25:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.356 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:56.356 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:12:56.922 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.180 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.438 16:25:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.438 00:12:57.438 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.438 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.438 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.695 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.696 { 00:12:57.696 "cntlid": 65, 00:12:57.696 "qid": 0, 00:12:57.696 "state": "enabled", 00:12:57.696 "thread": "nvmf_tgt_poll_group_000", 00:12:57.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:57.696 "listen_address": { 00:12:57.696 "trtype": "RDMA", 00:12:57.696 "adrfam": "IPv4", 00:12:57.696 "traddr": "192.168.100.8", 00:12:57.696 "trsvcid": "4420" 00:12:57.696 }, 00:12:57.696 "peer_address": { 00:12:57.696 "trtype": "RDMA", 00:12:57.696 "adrfam": "IPv4", 00:12:57.696 "traddr": "192.168.100.8", 00:12:57.696 "trsvcid": "43170" 
00:12:57.696 }, 00:12:57.696 "auth": { 00:12:57.696 "state": "completed", 00:12:57.696 "digest": "sha384", 00:12:57.696 "dhgroup": "ffdhe3072" 00:12:57.696 } 00:12:57.696 } 00:12:57.696 ]' 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.696 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:57.953 16:25:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:12:58.529 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.787 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.044 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.044 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.044 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.044 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.044 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.044 00:12:59.045 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.045 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.045 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.302 { 00:12:59.302 "cntlid": 67, 00:12:59.302 "qid": 0, 00:12:59.302 "state": "enabled", 00:12:59.302 "thread": "nvmf_tgt_poll_group_000", 00:12:59.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 
00:12:59.302 "listen_address": { 00:12:59.302 "trtype": "RDMA", 00:12:59.302 "adrfam": "IPv4", 00:12:59.302 "traddr": "192.168.100.8", 00:12:59.302 "trsvcid": "4420" 00:12:59.302 }, 00:12:59.302 "peer_address": { 00:12:59.302 "trtype": "RDMA", 00:12:59.302 "adrfam": "IPv4", 00:12:59.302 "traddr": "192.168.100.8", 00:12:59.302 "trsvcid": "46583" 00:12:59.302 }, 00:12:59.302 "auth": { 00:12:59.302 "state": "completed", 00:12:59.302 "digest": "sha384", 00:12:59.302 "dhgroup": "ffdhe3072" 00:12:59.302 } 00:12:59.302 } 00:12:59.302 ]' 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.302 16:25:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:12:59.560 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:00.123 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:00.380 16:25:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.637 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.893
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:00.893 {
00:13:00.893 "cntlid": 69,
00:13:00.893 "qid": 0,
00:13:00.893 "state": "enabled",
00:13:00.893 "thread": "nvmf_tgt_poll_group_000",
00:13:00.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:00.893 "listen_address": {
00:13:00.893 "trtype": "RDMA",
00:13:00.893 "adrfam": "IPv4",
00:13:00.893 "traddr": "192.168.100.8",
00:13:00.893 "trsvcid": "4420"
00:13:00.893 },
00:13:00.893 "peer_address": {
00:13:00.893 "trtype": "RDMA",
00:13:00.893 "adrfam": "IPv4",
00:13:00.893 "traddr": "192.168.100.8",
00:13:00.893 "trsvcid": "38065"
00:13:00.893 },
00:13:00.893 "auth": {
00:13:00.893 "state": "completed",
00:13:00.893 "digest": "sha384",
00:13:00.893 "dhgroup": "ffdhe3072"
00:13:00.893 }
00:13:00.893 }
00:13:00.893 ]'
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:00.893 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:01.150 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:01.407 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:01.407 16:25:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:01.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:01.971 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
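Each pass asserts on the auth object that nvmf_subsystem_get_qpairs returns, exactly as the jq probes at auth.sh@75-77 do. Against the qpair JSON printed above, the checks reduce to something like this minimal sketch (assuming $qpairs holds the captured RPC output):

    # Verify the negotiated digest, DH group, and final auth state.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]] || exit 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]] || exit 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1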
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:02.227 16:25:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:02.483
00:13:02.483 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:02.483 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:02.483 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:02.740 {
00:13:02.740 "cntlid": 71,
00:13:02.740 "qid": 0,
00:13:02.740 "state": "enabled",
00:13:02.740 "thread": "nvmf_tgt_poll_group_000",
00:13:02.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:02.740 "listen_address": {
00:13:02.740 "trtype": "RDMA",
00:13:02.740 "adrfam": "IPv4",
00:13:02.740 "traddr": "192.168.100.8",
00:13:02.740 "trsvcid": "4420"
00:13:02.740 },
00:13:02.740 "peer_address": {
00:13:02.740 "trtype": "RDMA",
00:13:02.740 "adrfam": "IPv4",
00:13:02.740 "traddr": "192.168.100.8",
00:13:02.740 "trsvcid": "44907"
00:13:02.740 },
00:13:02.740 "auth": {
00:13:02.740 "state": "completed",
00:13:02.740 "digest": "sha384",
00:13:02.740 "dhgroup": "ffdhe3072"
00:13:02.740 }
00:13:02.740 }
00:13:02.740 ]'
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:02.740 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:02.997 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=:
00:13:02.997 16:25:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=:
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:03.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
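The --dhchap-secret strings handed to nvme-cli use the DHHC-1 container format from the NVMe in-band authentication spec: DHHC-1:<hash>:<base64>:, where the hash field (00 = cleartext key, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) indicates how the secret was transformed, and the base64 payload is, per the spec, the key bytes followed by a 4-byte CRC32 check value. A quick sanity check of a secret's key length, using the key1 secret from this run:

    # 48 base64 chars decode to 36 bytes: a 32-byte key plus its CRC32.
    echo 'ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9' | base64 -d | wc -c
    # -> 36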
"${dhgroups[@]}" 00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.560 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.816 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.072 00:13:04.072 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.072 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.072 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.329 16:25:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.329 { 00:13:04.329 "cntlid": 73, 00:13:04.329 "qid": 0, 00:13:04.329 "state": "enabled", 00:13:04.329 "thread": "nvmf_tgt_poll_group_000", 00:13:04.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:04.329 "listen_address": { 00:13:04.329 "trtype": "RDMA", 00:13:04.329 "adrfam": "IPv4", 00:13:04.329 "traddr": "192.168.100.8", 00:13:04.329 "trsvcid": "4420" 00:13:04.329 }, 00:13:04.329 "peer_address": { 00:13:04.329 "trtype": "RDMA", 00:13:04.329 "adrfam": "IPv4", 00:13:04.329 "traddr": "192.168.100.8", 00:13:04.329 "trsvcid": "37070" 00:13:04.329 }, 00:13:04.329 "auth": { 00:13:04.329 "state": "completed", 00:13:04.329 "digest": "sha384", 00:13:04.329 "dhgroup": "ffdhe4096" 00:13:04.329 } 00:13:04.329 } 00:13:04.329 ]' 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:04.329 16:25:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.329 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.329 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.329 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.586 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:04.586 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:05.149 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
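The for-loops logged at auth.sh@119-120 give the overall shape of the test: the whole add-host/attach/verify/teardown cycle repeats for every (dhgroup, key) combination, with bdev_nvme_set_options restricting the host to exactly the digest and DH group under test before each round. Schematically (this section shows ffdhe3072 through ffdhe6144; the full group list is an assumption, and hostrpc/connect_authenticate are the test's own helpers):

    # Iteration skeleton reconstructed from the @119/@120/@121/@123 lines.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed list
    keys=(key0 key1 key2 key3)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                                          --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done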
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:05.406 16:25:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.406 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:05.663
00:13:05.663 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:05.663 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:05.663 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:05.920 {
00:13:05.920 "cntlid": 75,
00:13:05.920 "qid": 0,
00:13:05.920 "state": "enabled",
00:13:05.920 "thread": "nvmf_tgt_poll_group_000",
00:13:05.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:05.920 "listen_address": {
00:13:05.920 "trtype": "RDMA",
00:13:05.920 "adrfam": "IPv4",
00:13:05.920 "traddr": "192.168.100.8",
00:13:05.920 "trsvcid": "4420"
00:13:05.920 },
00:13:05.920 "peer_address": {
00:13:05.920 "trtype": "RDMA",
00:13:05.920 "adrfam": "IPv4",
00:13:05.920 "traddr": "192.168.100.8",
00:13:05.920 "trsvcid": "52788"
00:13:05.920 },
00:13:05.920 "auth": {
00:13:05.920 "state": "completed",
00:13:05.920 "digest": "sha384",
00:13:05.920 "dhgroup": "ffdhe4096"
00:13:05.920 }
00:13:05.920 }
00:13:05.920 ]'
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:05.920 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:06.209 16:26:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:06.809 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
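Note the asymmetry between key rounds: key0, key1, and key2 are added with both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication, where the controller must also prove itself to the host), while the key3 rounds pass only --dhchap-key. That is exactly what the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at auth.sh@68 implements: when no controller key exists for that index, the flag disappears. In isolation:

    # ${var:+word} expands to nothing when var is unset or empty, so the
    # --dhchap-ctrlr-key flag is simply dropped for keys without a ckey.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]=)   # index 3: unidirectional
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra flags: ${ckey[@]:-<none>}"        # -> extra flags: <none>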
00:13:07.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:07.066 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.322 16:26:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:07.322
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:07.579 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:07.579 {
00:13:07.579 "cntlid": 77,
00:13:07.579 "qid": 0,
00:13:07.579 "state": "enabled",
00:13:07.579 "thread": "nvmf_tgt_poll_group_000",
00:13:07.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:07.579 "listen_address": {
00:13:07.579 "trtype": "RDMA",
00:13:07.579 "adrfam": "IPv4",
00:13:07.579 "traddr": "192.168.100.8",
00:13:07.579 "trsvcid": "4420"
00:13:07.579 },
00:13:07.579 "peer_address": {
00:13:07.579 "trtype": "RDMA",
00:13:07.579 "adrfam": "IPv4",
00:13:07.579 "traddr": "192.168.100.8",
00:13:07.579 "trsvcid": "57970"
00:13:07.579 },
00:13:07.579 "auth": {
00:13:07.579 "state": "completed",
00:13:07.579 "digest": "sha384",
00:13:07.579 "dhgroup": "ffdhe4096"
00:13:07.579 }
00:13:07.579 }
00:13:07.579 ]'
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:07.836 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:08.094 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI:
00:13:08.094 16:26:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI:
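Throughout the trace, target-side RPCs (nvmf_subsystem_*) go to SPDK's default RPC socket, while everything wrapped in hostrpc is routed with -s /var/tmp/host.sock to a second SPDK application acting as the NVMe-oF host. Keeping the two roles in separate processes is what lets a single script drive both ends of the DH-HMAC-CHAP handshake; the socket argument alone selects the role:

    # Two SPDK instances, one RPC script; the socket selects the role.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # target (default socket)
    "$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers        # host instance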
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:08.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.657 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:08.658 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:08.658 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:08.914 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:09.172
00:13:09.172 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:09.172 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:09.172 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:09.429 {
00:13:09.429 "cntlid": 79,
00:13:09.429 "qid": 0,
00:13:09.429 "state": "enabled",
00:13:09.429 "thread": "nvmf_tgt_poll_group_000",
00:13:09.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:09.429 "listen_address": {
00:13:09.429 "trtype": "RDMA",
00:13:09.429 "adrfam": "IPv4",
00:13:09.429 "traddr": "192.168.100.8",
00:13:09.429 "trsvcid": "4420"
00:13:09.429 },
00:13:09.429 "peer_address": {
00:13:09.429 "trtype": "RDMA",
00:13:09.429 "adrfam": "IPv4",
00:13:09.429 "traddr": "192.168.100.8",
00:13:09.429 "trsvcid": "57688"
00:13:09.429 },
00:13:09.429 "auth": {
00:13:09.429 "state": "completed",
00:13:09.429 "digest": "sha384",
00:13:09.429 "dhgroup": "ffdhe4096"
00:13:09.429 }
00:13:09.429 }
00:13:09.429 ]'
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:09.429 16:26:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:09.429 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:09.685 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=:
00:13:09.685 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=:
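In every qpair dump the listen_address is the RDMA listener at 192.168.100.8:4420 (4420 is the IANA-assigned NVMe-oF port), while peer_address carries the ephemeral source port the host picked for that connection, which is why it changes on every attach. A one-liner to pull just those endpoints out of a dump (assuming $qpairs holds the JSON shown above):

    # Summarize one qpair's endpoints from nvmf_subsystem_get_qpairs output.
    jq -r '.[0] | "\(.listen_address.traddr):\(.listen_address.trsvcid) <- \(.peer_address.traddr):\(.peer_address.trsvcid)"' <<< "$qpairs"
    # -> 192.168.100.8:4420 <- 192.168.100.8:57688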
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:10.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:10.248 16:26:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.504 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.761
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:11.018 {
00:13:11.018 "cntlid": 81,
00:13:11.018 "qid": 0,
00:13:11.018 "state": "enabled",
00:13:11.018 "thread": "nvmf_tgt_poll_group_000",
00:13:11.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:11.018 "listen_address": {
00:13:11.018 "trtype": "RDMA",
00:13:11.018 "adrfam": "IPv4",
00:13:11.018 "traddr": "192.168.100.8",
00:13:11.018 "trsvcid": "4420"
00:13:11.018 },
00:13:11.018 "peer_address": {
00:13:11.018 "trtype": "RDMA",
00:13:11.018 "adrfam": "IPv4",
00:13:11.018 "traddr": "192.168.100.8",
00:13:11.018 "trsvcid": "57104"
00:13:11.018 },
00:13:11.018 "auth": {
00:13:11.018 "state": "completed",
00:13:11.018 "digest": "sha384",
00:13:11.018 "dhgroup": "ffdhe6144"
00:13:11.018 }
00:13:11.018 }
00:13:11.018 ]'
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:11.018 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=:
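Each round ends with the same three-step teardown seen throughout: detach the host-side controller, disconnect the kernel-initiator session that nvme connect created, and remove the host entry from the subsystem so the next round starts clean. In script form, mirrored from the @78/@82/@83 trace lines (using the $rpc and $hostnqn variables from the earlier sketch):

    # Teardown between rounds, as the trace performs it.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"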
00:13:11.275 16:26:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=:
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:12.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.204 16:26:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.460
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:12.717 {
00:13:12.717 "cntlid": 83,
00:13:12.717 "qid": 0,
00:13:12.717 "state": "enabled",
00:13:12.717 "thread": "nvmf_tgt_poll_group_000",
00:13:12.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:12.717 "listen_address": {
00:13:12.717 "trtype": "RDMA",
00:13:12.717 "adrfam": "IPv4",
00:13:12.717 "traddr": "192.168.100.8",
00:13:12.717 "trsvcid": "4420"
00:13:12.717 },
00:13:12.717 "peer_address": {
00:13:12.717 "trtype": "RDMA",
00:13:12.717 "adrfam": "IPv4",
00:13:12.717 "traddr": "192.168.100.8",
00:13:12.717 "trsvcid": "46801"
00:13:12.717 },
00:13:12.717 "auth": {
00:13:12.717 "state": "completed",
00:13:12.717 "digest": "sha384",
00:13:12.717 "dhgroup": "ffdhe6144"
00:13:12.717 }
00:13:12.717 }
00:13:12.717 ]'
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:12.717 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:12.973 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:12.973 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:12.973 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:12.973 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:12.973 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:13.229 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:13.229 16:26:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==:
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:13.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:13.791 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.048 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.304
00:13:14.304 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:14.304 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:14.304 16:26:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:14.559 {
00:13:14.559 "cntlid": 85,
00:13:14.559 "qid": 0,
00:13:14.559 "state": "enabled",
00:13:14.559 "thread": "nvmf_tgt_poll_group_000",
00:13:14.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:14.559 "listen_address": {
00:13:14.559 "trtype": "RDMA",
00:13:14.559 "adrfam": "IPv4",
00:13:14.559 "traddr": "192.168.100.8",
00:13:14.559 "trsvcid": "4420"
00:13:14.559 },
00:13:14.559 "peer_address": {
00:13:14.559 "trtype": "RDMA",
00:13:14.559 "adrfam": "IPv4",
00:13:14.559 "traddr": "192.168.100.8",
00:13:14.559 "trsvcid": "46045"
00:13:14.559 },
00:13:14.559 "auth": {
00:13:14.559 "state": "completed",
00:13:14.559 "digest": "sha384",
00:13:14.559 "dhgroup": "ffdhe6144"
00:13:14.559 }
00:13:14.559 }
00:13:14.559 ]'
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:14.559
16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.559 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.815 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:14.815 16:26:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:15.380 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:15.636 16:26:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.636 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.637 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:15.637 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.637 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.200 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.200 { 00:13:16.200 "cntlid": 87, 00:13:16.200 "qid": 0, 00:13:16.200 "state": "enabled", 00:13:16.200 "thread": "nvmf_tgt_poll_group_000", 00:13:16.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:16.200 "listen_address": { 00:13:16.200 "trtype": "RDMA", 00:13:16.200 "adrfam": "IPv4", 00:13:16.200 "traddr": "192.168.100.8", 00:13:16.200 "trsvcid": "4420" 00:13:16.200 }, 00:13:16.200 "peer_address": { 00:13:16.200 "trtype": "RDMA", 00:13:16.200 "adrfam": "IPv4", 00:13:16.200 "traddr": "192.168.100.8", 00:13:16.200 "trsvcid": "40219" 00:13:16.200 }, 00:13:16.200 "auth": { 00:13:16.200 "state": "completed", 00:13:16.200 "digest": "sha384", 00:13:16.200 "dhgroup": "ffdhe6144" 00:13:16.200 } 00:13:16.200 } 00:13:16.200 ]' 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.200 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.457 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:16.457 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.457 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.457 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.457 16:26:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.457 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:16.457 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.385 16:26:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.385 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.948 00:13:17.948 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.948 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.948 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.204 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.204 { 00:13:18.204 "cntlid": 89, 00:13:18.204 "qid": 0, 00:13:18.204 "state": "enabled", 00:13:18.204 "thread": "nvmf_tgt_poll_group_000", 00:13:18.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:18.204 "listen_address": { 00:13:18.204 "trtype": "RDMA", 00:13:18.204 "adrfam": "IPv4", 00:13:18.204 "traddr": "192.168.100.8", 00:13:18.204 "trsvcid": "4420" 00:13:18.204 }, 00:13:18.204 "peer_address": { 00:13:18.204 "trtype": "RDMA", 00:13:18.204 "adrfam": "IPv4", 00:13:18.204 "traddr": "192.168.100.8", 00:13:18.204 "trsvcid": "41959" 00:13:18.204 }, 00:13:18.204 "auth": { 00:13:18.204 "state": "completed", 00:13:18.204 "digest": "sha384", 00:13:18.205 "dhgroup": "ffdhe8192" 00:13:18.205 } 00:13:18.205 } 00:13:18.205 ]' 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.205 16:26:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.461 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:18.461 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.025 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.282 16:26:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.844 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.844 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.101 { 00:13:20.101 "cntlid": 91, 00:13:20.101 "qid": 0, 00:13:20.101 "state": "enabled", 00:13:20.101 "thread": "nvmf_tgt_poll_group_000", 00:13:20.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:20.101 "listen_address": { 00:13:20.101 "trtype": "RDMA", 00:13:20.101 "adrfam": "IPv4", 00:13:20.101 "traddr": "192.168.100.8", 00:13:20.101 "trsvcid": "4420" 00:13:20.101 }, 00:13:20.101 "peer_address": { 00:13:20.101 "trtype": "RDMA", 00:13:20.101 "adrfam": "IPv4", 00:13:20.101 "traddr": "192.168.100.8", 00:13:20.101 "trsvcid": "50507" 00:13:20.101 }, 00:13:20.101 "auth": { 
00:13:20.101 "state": "completed", 00:13:20.101 "digest": "sha384", 00:13:20.101 "dhgroup": "ffdhe8192" 00:13:20.101 } 00:13:20.101 } 00:13:20.101 ]' 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.101 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.357 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:20.357 16:26:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:20.927 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.183 16:26:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.746 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.746 { 00:13:21.746 "cntlid": 93, 00:13:21.746 "qid": 0, 00:13:21.746 "state": "enabled", 00:13:21.746 "thread": "nvmf_tgt_poll_group_000", 00:13:21.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:21.746 "listen_address": { 00:13:21.746 "trtype": "RDMA", 00:13:21.746 "adrfam": "IPv4", 00:13:21.746 "traddr": "192.168.100.8", 
00:13:21.746 "trsvcid": "4420" 00:13:21.746 }, 00:13:21.746 "peer_address": { 00:13:21.746 "trtype": "RDMA", 00:13:21.746 "adrfam": "IPv4", 00:13:21.746 "traddr": "192.168.100.8", 00:13:21.746 "trsvcid": "37843" 00:13:21.746 }, 00:13:21.746 "auth": { 00:13:21.746 "state": "completed", 00:13:21.746 "digest": "sha384", 00:13:21.746 "dhgroup": "ffdhe8192" 00:13:21.746 } 00:13:21.746 } 00:13:21.746 ]' 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:21.746 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:22.002 16:26:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.931 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.932 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.932 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.932 16:26:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.495 00:13:23.495 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.495 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.495 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.752 { 00:13:23.752 "cntlid": 95, 00:13:23.752 "qid": 0, 00:13:23.752 "state": "enabled", 00:13:23.752 "thread": "nvmf_tgt_poll_group_000", 00:13:23.752 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:23.752 "listen_address": { 00:13:23.752 "trtype": "RDMA", 00:13:23.752 "adrfam": "IPv4", 00:13:23.752 "traddr": "192.168.100.8", 00:13:23.752 "trsvcid": "4420" 00:13:23.752 }, 00:13:23.752 "peer_address": { 00:13:23.752 "trtype": "RDMA", 00:13:23.752 "adrfam": "IPv4", 00:13:23.752 "traddr": "192.168.100.8", 00:13:23.752 "trsvcid": "48630" 00:13:23.752 }, 00:13:23.752 "auth": { 00:13:23.752 "state": "completed", 00:13:23.752 "digest": "sha384", 00:13:23.752 "dhgroup": "ffdhe8192" 00:13:23.752 } 00:13:23.752 } 00:13:23.752 ]' 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.752 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.009 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:24.009 16:26:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.572 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.828 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.085 00:13:25.085 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.085 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.085 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.342 16:26:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.342 { 00:13:25.342 "cntlid": 97, 00:13:25.342 "qid": 0, 00:13:25.342 "state": "enabled", 00:13:25.342 "thread": "nvmf_tgt_poll_group_000", 00:13:25.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:25.342 "listen_address": { 00:13:25.342 "trtype": "RDMA", 00:13:25.342 "adrfam": "IPv4", 00:13:25.342 "traddr": "192.168.100.8", 00:13:25.342 "trsvcid": "4420" 00:13:25.342 }, 00:13:25.342 "peer_address": { 00:13:25.342 "trtype": "RDMA", 00:13:25.342 "adrfam": "IPv4", 00:13:25.342 "traddr": "192.168.100.8", 00:13:25.342 "trsvcid": "34590" 00:13:25.342 }, 00:13:25.342 "auth": { 00:13:25.342 "state": "completed", 00:13:25.342 "digest": "sha512", 00:13:25.342 "dhgroup": "null" 00:13:25.342 } 00:13:25.342 } 00:13:25.342 ]' 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:25.342 16:26:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.342 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.342 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.342 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.599 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:25.599 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:26.161 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.418 16:26:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.418 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.674 00:13:26.674 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.674 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.674 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.931 { 00:13:26.931 "cntlid": 99, 00:13:26.931 "qid": 0, 00:13:26.931 "state": "enabled", 00:13:26.931 "thread": "nvmf_tgt_poll_group_000", 00:13:26.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:26.931 "listen_address": { 00:13:26.931 "trtype": "RDMA", 00:13:26.931 "adrfam": "IPv4", 00:13:26.931 "traddr": "192.168.100.8", 00:13:26.931 "trsvcid": "4420" 00:13:26.931 }, 00:13:26.931 "peer_address": { 00:13:26.931 "trtype": "RDMA", 00:13:26.931 "adrfam": "IPv4", 00:13:26.931 "traddr": "192.168.100.8", 00:13:26.931 "trsvcid": "37939" 00:13:26.931 }, 00:13:26.931 "auth": { 00:13:26.931 "state": "completed", 00:13:26.931 "digest": "sha512", 00:13:26.931 "dhgroup": "null" 00:13:26.931 } 00:13:26.931 } 00:13:26.931 ]' 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.931 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.187 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:27.187 16:26:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:27.750 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:28.006 
16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.006 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.263 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.263 16:26:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.520 
16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.520 { 00:13:28.520 "cntlid": 101, 00:13:28.520 "qid": 0, 00:13:28.520 "state": "enabled", 00:13:28.520 "thread": "nvmf_tgt_poll_group_000", 00:13:28.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:28.520 "listen_address": { 00:13:28.520 "trtype": "RDMA", 00:13:28.520 "adrfam": "IPv4", 00:13:28.520 "traddr": "192.168.100.8", 00:13:28.520 "trsvcid": "4420" 00:13:28.520 }, 00:13:28.520 "peer_address": { 00:13:28.520 "trtype": "RDMA", 00:13:28.520 "adrfam": "IPv4", 00:13:28.520 "traddr": "192.168.100.8", 00:13:28.520 "trsvcid": "59931" 00:13:28.520 }, 00:13:28.520 "auth": { 00:13:28.520 "state": "completed", 00:13:28.520 "digest": "sha512", 00:13:28.520 "dhgroup": "null" 00:13:28.520 } 00:13:28.520 } 00:13:28.520 ]' 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:28.520 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.777 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.777 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.777 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.777 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:28.777 16:26:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:29.340 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.596 16:26:24 
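The [[ ... ]] checks in each pass are assertions over the qpair JSON that nvmf_subsystem_get_qpairs prints: the auth object must report the digest and DH group that were configured, and its state must have reached completed. The same three checks in standalone form, assuming $qpairs holds the JSON array captured above:

  # digest and dhgroup must match what bdev_nvme_set_options allowed
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  # and the DH-HMAC-CHAP exchange must have finished successfully
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]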
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:29.596 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.852 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.852 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.109 { 00:13:30.109 "cntlid": 103, 00:13:30.109 "qid": 0, 00:13:30.109 "state": "enabled", 00:13:30.109 "thread": "nvmf_tgt_poll_group_000", 00:13:30.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:30.109 "listen_address": { 00:13:30.109 "trtype": "RDMA", 00:13:30.109 "adrfam": "IPv4", 00:13:30.109 "traddr": "192.168.100.8", 00:13:30.109 "trsvcid": "4420" 00:13:30.109 }, 00:13:30.109 "peer_address": { 00:13:30.109 "trtype": "RDMA", 00:13:30.109 "adrfam": "IPv4", 00:13:30.109 "traddr": "192.168.100.8", 00:13:30.109 "trsvcid": "40062" 00:13:30.109 }, 00:13:30.109 "auth": { 00:13:30.109 "state": "completed", 00:13:30.109 "digest": "sha512", 00:13:30.109 "dhgroup": "null" 00:13:30.109 } 00:13:30.109 } 00:13:30.109 ]' 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.109 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.366 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:30.366 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.366 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.366 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.366 16:26:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.621 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:30.621 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.182 16:26:25 
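Note the asymmetry in the key3 pass that just finished: nvmf_subsystem_add_host was called with --dhchap-key key3 only, and the kernel connect carried a single DHHC-1:03 secret with no --dhchap-ctrl-secret, so only the host proves its identity and no bidirectional (controller) authentication is negotiated. The ${ckeys[$3]:+...} expansion visible in the traces is what makes the controller key optional; a sketch of it, where subnqn is illustrative and keyid arrives as the third positional parameter of connect_authenticate:

  # expands to an empty array when ckeys[keyid] is empty, as it is for key 3
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"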
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.182 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.438 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:31.438 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.438 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.438 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.438 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.439 16:26:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.694 00:13:31.694 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:13:31.694 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.694 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.949 { 00:13:31.949 "cntlid": 105, 00:13:31.949 "qid": 0, 00:13:31.949 "state": "enabled", 00:13:31.949 "thread": "nvmf_tgt_poll_group_000", 00:13:31.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:31.949 "listen_address": { 00:13:31.949 "trtype": "RDMA", 00:13:31.949 "adrfam": "IPv4", 00:13:31.949 "traddr": "192.168.100.8", 00:13:31.949 "trsvcid": "4420" 00:13:31.949 }, 00:13:31.949 "peer_address": { 00:13:31.949 "trtype": "RDMA", 00:13:31.949 "adrfam": "IPv4", 00:13:31.949 "traddr": "192.168.100.8", 00:13:31.949 "trsvcid": "51606" 00:13:31.949 }, 00:13:31.949 "auth": { 00:13:31.949 "state": "completed", 00:13:31.949 "digest": "sha512", 00:13:31.949 "dhgroup": "ffdhe2048" 00:13:31.949 } 00:13:31.949 } 00:13:31.949 ]' 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.949 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.950 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.950 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.950 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.950 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.950 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.206 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:32.206 16:26:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:32.770 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.027 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.283 00:13:33.283 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.283 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.283 16:26:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.540 { 00:13:33.540 "cntlid": 107, 00:13:33.540 "qid": 0, 00:13:33.540 "state": "enabled", 00:13:33.540 "thread": "nvmf_tgt_poll_group_000", 00:13:33.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:33.540 "listen_address": { 00:13:33.540 "trtype": "RDMA", 00:13:33.540 "adrfam": "IPv4", 00:13:33.540 "traddr": "192.168.100.8", 00:13:33.540 "trsvcid": "4420" 00:13:33.540 }, 00:13:33.540 "peer_address": { 00:13:33.540 "trtype": "RDMA", 00:13:33.540 "adrfam": "IPv4", 00:13:33.540 "traddr": "192.168.100.8", 00:13:33.540 "trsvcid": "56131" 00:13:33.540 }, 00:13:33.540 "auth": { 00:13:33.540 "state": "completed", 00:13:33.540 "digest": "sha512", 00:13:33.540 "dhgroup": "ffdhe2048" 00:13:33.540 } 00:13:33.540 } 00:13:33.540 ]' 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.540 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.797 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 
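The nvme_connect invocations in these passes carry both secrets: --dhchap-secret authenticates the host to the controller, and --dhchap-ctrl-secret makes the host verify the controller in return, i.e. bidirectional DH-HMAC-CHAP. Reduced to its shape, with host_key and ctrl_key as illustrative stand-ins for the DHHC-1 blobs in the traces:

  # bidirectional auth: host key plus controller key
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0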
00:13:33.797 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:34.361 16:26:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.361 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.618 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.875 00:13:34.875 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.875 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.875 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.132 { 00:13:35.132 "cntlid": 109, 00:13:35.132 "qid": 0, 00:13:35.132 "state": "enabled", 00:13:35.132 "thread": "nvmf_tgt_poll_group_000", 00:13:35.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:35.132 "listen_address": { 00:13:35.132 "trtype": "RDMA", 00:13:35.132 "adrfam": "IPv4", 00:13:35.132 "traddr": "192.168.100.8", 00:13:35.132 "trsvcid": "4420" 00:13:35.132 }, 00:13:35.132 "peer_address": { 00:13:35.132 "trtype": "RDMA", 00:13:35.132 "adrfam": "IPv4", 00:13:35.132 "traddr": "192.168.100.8", 00:13:35.132 "trsvcid": "57572" 00:13:35.132 }, 00:13:35.132 "auth": { 00:13:35.132 "state": "completed", 00:13:35.132 "digest": "sha512", 00:13:35.132 "dhgroup": "ffdhe2048" 00:13:35.132 } 00:13:35.132 } 00:13:35.132 ]' 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.132 16:26:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.391 16:26:30 
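Before every attach, the host-side bdev_nvme_set_options call narrows the allowed digests and DH groups to exactly the combination under test, so a pass can only succeed if the target negotiates that pair. The hostrpc prefix seen throughout is simply rpc.py aimed at the host SPDK instance's socket, as the expanded command lines show; a sketch, with rootdir standing in for the checkout path in the log:

  # talk to the host-side SPDK app instead of the target
  hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048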
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:35.391 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:35.951 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:36.208 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:36.464 16:26:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.464 16:26:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.464 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.744 { 00:13:36.744 "cntlid": 111, 00:13:36.744 "qid": 0, 00:13:36.744 "state": "enabled", 00:13:36.744 "thread": "nvmf_tgt_poll_group_000", 00:13:36.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:36.744 "listen_address": { 00:13:36.744 "trtype": "RDMA", 00:13:36.744 "adrfam": "IPv4", 00:13:36.744 "traddr": "192.168.100.8", 00:13:36.744 "trsvcid": "4420" 00:13:36.744 }, 00:13:36.744 "peer_address": { 00:13:36.744 "trtype": "RDMA", 00:13:36.744 "adrfam": "IPv4", 00:13:36.744 "traddr": "192.168.100.8", 00:13:36.744 "trsvcid": "47590" 00:13:36.744 }, 00:13:36.744 "auth": { 00:13:36.744 "state": "completed", 00:13:36.744 "digest": "sha512", 00:13:36.744 "dhgroup": "ffdhe2048" 00:13:36.744 } 00:13:36.744 } 00:13:36.744 ]' 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.744 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.001 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.001 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.001 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.001 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:37.001 16:26:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:37.565 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.823 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:38.081 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.081 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.081 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.081 00:13:38.339 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.339 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.339 16:26:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.339 { 00:13:38.339 "cntlid": 113, 00:13:38.339 "qid": 0, 00:13:38.339 "state": "enabled", 00:13:38.339 "thread": "nvmf_tgt_poll_group_000", 00:13:38.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:38.339 "listen_address": { 00:13:38.339 "trtype": "RDMA", 00:13:38.339 "adrfam": "IPv4", 00:13:38.339 "traddr": "192.168.100.8", 00:13:38.339 "trsvcid": "4420" 00:13:38.339 }, 00:13:38.339 "peer_address": { 00:13:38.339 "trtype": "RDMA", 00:13:38.339 "adrfam": "IPv4", 00:13:38.339 "traddr": "192.168.100.8", 00:13:38.339 "trsvcid": "53126" 00:13:38.339 }, 00:13:38.339 "auth": { 00:13:38.339 "state": "completed", 00:13:38.339 "digest": "sha512", 00:13:38.339 "dhgroup": "ffdhe3072" 00:13:38.339 } 00:13:38.339 } 00:13:38.339 ]' 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.339 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.597 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:38.598 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:39.162 16:26:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.420 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
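All the secrets in this run share the NVMe DH-HMAC-CHAP key representation DHHC-1:<hh>:<base64 key material>:, where the two-digit field records how the secret was transformed: 00 for a plain secret, and 01/02/03 for SHA-256/SHA-384/SHA-512 respectively. That is why the key0 secrets above begin with DHHC-1:00: while the key3 ones begin with DHHC-1:03:. nvme-cli can mint such keys with its gen-dhchap-key subcommand; the flags below are an assumption and vary by nvme-cli version:

  # assumed invocation: --hmac=1 would select a SHA-256-transformed (DHHC-1:01:...) key
  # nvme gen-dhchap-key --hmac=1 --nqn "$hostnqn"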
00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.677 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.934 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.934 { 00:13:39.934 "cntlid": 115, 00:13:39.934 "qid": 0, 00:13:39.934 "state": "enabled", 00:13:39.934 "thread": "nvmf_tgt_poll_group_000", 00:13:39.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:39.934 "listen_address": { 00:13:39.934 "trtype": "RDMA", 00:13:39.934 "adrfam": "IPv4", 00:13:39.934 "traddr": "192.168.100.8", 00:13:39.934 "trsvcid": "4420" 00:13:39.934 }, 00:13:39.934 "peer_address": { 00:13:39.934 "trtype": "RDMA", 00:13:39.934 "adrfam": "IPv4", 00:13:39.934 "traddr": "192.168.100.8", 00:13:39.934 "trsvcid": "46025" 00:13:39.934 }, 00:13:39.934 "auth": { 00:13:39.934 "state": "completed", 00:13:39.934 "digest": "sha512", 00:13:39.934 "dhgroup": "ffdhe3072" 00:13:39.934 } 00:13:39.934 } 00:13:39.934 ]' 00:13:39.934 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.191 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.449 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:40.449 16:26:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.012 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.268 
16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.268 16:26:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.523 00:13:41.523 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.523 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.523 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.780 { 00:13:41.780 "cntlid": 117, 00:13:41.780 "qid": 0, 00:13:41.780 "state": "enabled", 00:13:41.780 "thread": "nvmf_tgt_poll_group_000", 00:13:41.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:41.780 "listen_address": { 00:13:41.780 "trtype": "RDMA", 00:13:41.780 "adrfam": "IPv4", 00:13:41.780 "traddr": "192.168.100.8", 00:13:41.780 "trsvcid": "4420" 00:13:41.780 }, 00:13:41.780 "peer_address": { 00:13:41.780 "trtype": "RDMA", 00:13:41.780 "adrfam": "IPv4", 00:13:41.780 "traddr": "192.168.100.8", 00:13:41.780 "trsvcid": "46967" 00:13:41.780 }, 00:13:41.780 "auth": { 00:13:41.780 "state": "completed", 00:13:41.780 "digest": "sha512", 00:13:41.780 "dhgroup": "ffdhe3072" 00:13:41.780 } 00:13:41.780 } 00:13:41.780 ]' 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.780 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.036 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:42.036 16:26:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.599 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.855 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
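Aside on the verification step: every connect_authenticate pass in this section runs the same three checks at auth.sh@75-@77, parsing the nvmf_subsystem_get_qpairs output with jq and comparing against the digest and DH group the host was configured with. A minimal standalone sketch of that check follows; the rpc.py path is copied from the log, while querying the target over its default RPC socket is an assumption of this sketch.

  # Condensed form of the auth.sh@74-@77 checks: confirm the first qpair on
  # the subsystem negotiated the expected digest and DH group and that the
  # DH-HMAC-CHAP exchange reached the "completed" state.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]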
00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.856 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.112 00:13:43.112 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.112 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.112 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.368 { 00:13:43.368 "cntlid": 119, 00:13:43.368 "qid": 0, 00:13:43.368 "state": "enabled", 00:13:43.368 "thread": "nvmf_tgt_poll_group_000", 00:13:43.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:43.368 "listen_address": { 00:13:43.368 "trtype": "RDMA", 00:13:43.368 "adrfam": "IPv4", 00:13:43.368 "traddr": "192.168.100.8", 00:13:43.368 "trsvcid": "4420" 00:13:43.368 }, 00:13:43.368 "peer_address": { 00:13:43.368 "trtype": "RDMA", 00:13:43.368 "adrfam": "IPv4", 00:13:43.368 "traddr": "192.168.100.8", 00:13:43.368 "trsvcid": "38052" 00:13:43.368 }, 00:13:43.368 "auth": { 00:13:43.368 "state": "completed", 00:13:43.368 "digest": "sha512", 00:13:43.368 "dhgroup": "ffdhe3072" 
00:13:43.368 } 00:13:43.368 } 00:13:43.368 ]' 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.368 16:26:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.368 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.368 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.368 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.368 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.368 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.624 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:43.624 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:44.188 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.445 16:26:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.445 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.702 00:13:44.702 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.702 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.702 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.959 { 00:13:44.959 "cntlid": 121, 00:13:44.959 "qid": 0, 00:13:44.959 "state": "enabled", 00:13:44.959 "thread": "nvmf_tgt_poll_group_000", 00:13:44.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:44.959 "listen_address": { 00:13:44.959 "trtype": "RDMA", 00:13:44.959 "adrfam": "IPv4", 00:13:44.959 "traddr": "192.168.100.8", 00:13:44.959 "trsvcid": "4420" 00:13:44.959 }, 00:13:44.959 "peer_address": { 00:13:44.959 "trtype": "RDMA", 
00:13:44.959 "adrfam": "IPv4", 00:13:44.959 "traddr": "192.168.100.8", 00:13:44.959 "trsvcid": "48331" 00:13:44.959 }, 00:13:44.959 "auth": { 00:13:44.959 "state": "completed", 00:13:44.959 "digest": "sha512", 00:13:44.959 "dhgroup": "ffdhe4096" 00:13:44.959 } 00:13:44.959 } 00:13:44.959 ]' 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:44.959 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.215 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.215 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.215 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.215 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:45.215 16:26:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:45.778 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:46.035 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.291 16:26:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.548 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.548 { 00:13:46.548 "cntlid": 123, 00:13:46.548 "qid": 0, 00:13:46.548 "state": "enabled", 00:13:46.548 "thread": "nvmf_tgt_poll_group_000", 
00:13:46.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:46.548 "listen_address": { 00:13:46.548 "trtype": "RDMA", 00:13:46.548 "adrfam": "IPv4", 00:13:46.548 "traddr": "192.168.100.8", 00:13:46.548 "trsvcid": "4420" 00:13:46.548 }, 00:13:46.548 "peer_address": { 00:13:46.548 "trtype": "RDMA", 00:13:46.548 "adrfam": "IPv4", 00:13:46.548 "traddr": "192.168.100.8", 00:13:46.548 "trsvcid": "35932" 00:13:46.548 }, 00:13:46.548 "auth": { 00:13:46.548 "state": "completed", 00:13:46.548 "digest": "sha512", 00:13:46.548 "dhgroup": "ffdhe4096" 00:13:46.548 } 00:13:46.548 } 00:13:46.548 ]' 00:13:46.548 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.803 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.804 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.062 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:47.062 16:26:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:13:47.666 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.963 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.964 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:47.964 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
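Complementing the host-side options, the target grants each key at auth.sh@70 via nvmf_subsystem_add_host, passing key names rather than inline secrets. A sketch over the target's default RPC socket; that key2/ckey2 were registered with the keyring earlier in auth.sh is an assumption carried over from the script's setup, which is not shown in this excerpt.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Allow this host NQN onto cnode0, authenticating with key2 and
  # (bidirectionally) ckey2; both are keyring names, not DHHC-1 strings.
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2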
00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.220 { 00:13:48.220 "cntlid": 125, 00:13:48.220 "qid": 0, 00:13:48.220 "state": "enabled", 00:13:48.220 "thread": "nvmf_tgt_poll_group_000", 00:13:48.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:48.220 "listen_address": { 00:13:48.220 "trtype": "RDMA", 00:13:48.220 "adrfam": "IPv4", 00:13:48.220 "traddr": "192.168.100.8", 00:13:48.220 "trsvcid": "4420" 00:13:48.220 }, 00:13:48.220 "peer_address": { 00:13:48.220 "trtype": "RDMA", 00:13:48.220 "adrfam": "IPv4", 00:13:48.220 "traddr": "192.168.100.8", 00:13:48.220 "trsvcid": "51206" 00:13:48.220 }, 00:13:48.220 "auth": { 00:13:48.220 "state": "completed", 00:13:48.220 "digest": "sha512", 00:13:48.220 "dhgroup": "ffdhe4096" 00:13:48.220 } 00:13:48.220 } 00:13:48.220 ]' 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.220 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.475 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.475 16:26:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.475 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.475 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.475 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.475 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:48.475 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:49.401 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.401 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:49.401 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.401 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.401 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.401 16:26:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.402 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.402 16:26:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.402 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.658 00:13:49.658 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.658 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.658 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.914 16:26:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.914 { 00:13:49.914 "cntlid": 127, 00:13:49.914 "qid": 0, 00:13:49.914 "state": "enabled", 00:13:49.914 "thread": "nvmf_tgt_poll_group_000", 00:13:49.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:49.914 "listen_address": { 00:13:49.914 "trtype": "RDMA", 00:13:49.914 "adrfam": "IPv4", 00:13:49.914 "traddr": "192.168.100.8", 00:13:49.914 "trsvcid": "4420" 00:13:49.914 }, 00:13:49.914 "peer_address": { 00:13:49.914 "trtype": "RDMA", 00:13:49.914 "adrfam": "IPv4", 00:13:49.914 "traddr": "192.168.100.8", 00:13:49.914 "trsvcid": "53496" 00:13:49.914 }, 00:13:49.914 "auth": { 00:13:49.914 "state": "completed", 00:13:49.914 "digest": "sha512", 00:13:49.914 "dhgroup": "ffdhe4096" 00:13:49.914 } 00:13:49.914 } 00:13:49.914 ]' 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.914 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.170 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:50.170 16:26:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:50.733 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.989 16:26:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.549 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.550 16:26:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.550 { 00:13:51.550 "cntlid": 129, 00:13:51.550 "qid": 0, 00:13:51.550 "state": "enabled", 00:13:51.550 "thread": "nvmf_tgt_poll_group_000", 00:13:51.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:51.550 "listen_address": { 00:13:51.550 "trtype": "RDMA", 00:13:51.550 "adrfam": "IPv4", 00:13:51.550 "traddr": "192.168.100.8", 00:13:51.550 "trsvcid": "4420" 00:13:51.550 }, 00:13:51.550 "peer_address": { 00:13:51.550 "trtype": "RDMA", 00:13:51.550 "adrfam": "IPv4", 00:13:51.550 "traddr": "192.168.100.8", 00:13:51.550 "trsvcid": "47639" 00:13:51.550 }, 00:13:51.550 "auth": { 00:13:51.550 "state": "completed", 00:13:51.550 "digest": "sha512", 00:13:51.550 "dhgroup": "ffdhe6144" 00:13:51.550 } 00:13:51.550 } 00:13:51.550 ]' 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.550 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:51.805 16:26:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.732 16:26:47 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.732 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.293 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.293 { 00:13:53.293 "cntlid": 131, 00:13:53.293 "qid": 0, 00:13:53.293 "state": "enabled", 00:13:53.293 "thread": "nvmf_tgt_poll_group_000", 00:13:53.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:53.293 "listen_address": { 00:13:53.293 "trtype": "RDMA", 00:13:53.293 "adrfam": "IPv4", 00:13:53.293 "traddr": "192.168.100.8", 00:13:53.293 "trsvcid": "4420" 00:13:53.293 }, 00:13:53.293 "peer_address": { 00:13:53.293 "trtype": "RDMA", 00:13:53.293 "adrfam": "IPv4", 00:13:53.293 "traddr": "192.168.100.8", 00:13:53.293 "trsvcid": "56032" 00:13:53.293 }, 00:13:53.293 "auth": { 00:13:53.293 "state": "completed", 00:13:53.293 "digest": "sha512", 00:13:53.293 "dhgroup": "ffdhe6144" 00:13:53.293 } 00:13:53.293 } 00:13:53.293 ]' 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.293 16:26:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:53.549 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret 
DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:13:54.108 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.364 16:26:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.621 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.875 00:13:54.875 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.875 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.875 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.130 { 00:13:55.130 "cntlid": 133, 00:13:55.130 "qid": 0, 00:13:55.130 "state": "enabled", 00:13:55.130 "thread": "nvmf_tgt_poll_group_000", 00:13:55.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:55.130 "listen_address": { 00:13:55.130 "trtype": "RDMA", 00:13:55.130 "adrfam": "IPv4", 00:13:55.130 "traddr": "192.168.100.8", 00:13:55.130 "trsvcid": "4420" 00:13:55.130 }, 00:13:55.130 "peer_address": { 00:13:55.130 "trtype": "RDMA", 00:13:55.130 "adrfam": "IPv4", 00:13:55.130 "traddr": "192.168.100.8", 00:13:55.130 "trsvcid": "53213" 00:13:55.130 }, 00:13:55.130 "auth": { 00:13:55.130 "state": "completed", 00:13:55.130 "digest": "sha512", 00:13:55.130 "dhgroup": "ffdhe6144" 00:13:55.130 } 00:13:55.130 } 00:13:55.130 ]' 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.130 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.386 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:55.386 16:26:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:13:55.947 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.947 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:55.947 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.947 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.203 16:26:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.468 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.725 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.725 { 00:13:56.725 "cntlid": 135, 00:13:56.725 "qid": 0, 00:13:56.725 "state": "enabled", 00:13:56.725 "thread": "nvmf_tgt_poll_group_000", 00:13:56.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:56.725 "listen_address": { 00:13:56.725 "trtype": "RDMA", 00:13:56.726 "adrfam": "IPv4", 00:13:56.726 "traddr": "192.168.100.8", 00:13:56.726 "trsvcid": "4420" 00:13:56.726 }, 00:13:56.726 "peer_address": { 00:13:56.726 "trtype": "RDMA", 00:13:56.726 "adrfam": "IPv4", 00:13:56.726 "traddr": "192.168.100.8", 00:13:56.726 "trsvcid": "43829" 00:13:56.726 }, 00:13:56.726 "auth": { 00:13:56.726 "state": "completed", 00:13:56.726 "digest": "sha512", 00:13:56.726 "dhgroup": "ffdhe6144" 00:13:56.726 } 00:13:56.726 } 00:13:56.726 ]' 00:13:56.726 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.726 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.726 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 
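The rounds above all repeat one connect_authenticate pattern per digest/DH-group/key combination. Condensed from the commands in the trace, a minimal standalone sketch of a single round looks like this ($hostnqn and the key name key3 stand in for values registered earlier in the run; rpc.py is spdk/scripts/rpc.py, and /var/tmp/host.sock is the host-side bdev_nvme daemon's RPC socket):

    # host side: restrict DH-HMAC-CHAP negotiation to one digest and DH group
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: allow the host, binding its DH-HMAC-CHAP key
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # authenticate while attaching the controller over RDMA
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
    # verify the qpair finished authentication, then tear down
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0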
00:13:56.988 16:26:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:13:57.555 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:57.813 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.068 16:26:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.324 00:13:58.324 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.324 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.324 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.579 { 00:13:58.579 "cntlid": 137, 00:13:58.579 "qid": 0, 00:13:58.579 "state": "enabled", 00:13:58.579 "thread": "nvmf_tgt_poll_group_000", 00:13:58.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:58.579 "listen_address": { 00:13:58.579 "trtype": "RDMA", 00:13:58.579 "adrfam": "IPv4", 00:13:58.579 "traddr": "192.168.100.8", 00:13:58.579 "trsvcid": "4420" 00:13:58.579 }, 00:13:58.579 "peer_address": { 00:13:58.579 "trtype": "RDMA", 00:13:58.579 "adrfam": "IPv4", 00:13:58.579 "traddr": "192.168.100.8", 00:13:58.579 "trsvcid": "47795" 00:13:58.579 }, 00:13:58.579 "auth": { 00:13:58.579 "state": "completed", 00:13:58.579 "digest": "sha512", 00:13:58.579 "dhgroup": "ffdhe8192" 00:13:58.579 } 00:13:58.579 } 00:13:58.579 ]' 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:58.579 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.834 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.834 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.834 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.834 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:58.834 16:26:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:13:59.394 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:59.649 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.906 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.163 00:14:00.163 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.163 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.163 16:26:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.419 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.420 { 00:14:00.420 "cntlid": 139, 00:14:00.420 "qid": 0, 00:14:00.420 "state": "enabled", 00:14:00.420 "thread": "nvmf_tgt_poll_group_000", 00:14:00.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:00.420 "listen_address": { 00:14:00.420 "trtype": "RDMA", 00:14:00.420 "adrfam": "IPv4", 00:14:00.420 "traddr": "192.168.100.8", 00:14:00.420 "trsvcid": "4420" 00:14:00.420 }, 00:14:00.420 "peer_address": { 00:14:00.420 "trtype": "RDMA", 00:14:00.420 "adrfam": "IPv4", 00:14:00.420 "traddr": "192.168.100.8", 00:14:00.420 "trsvcid": "48185" 00:14:00.420 }, 00:14:00.420 "auth": { 00:14:00.420 "state": "completed", 00:14:00.420 "digest": "sha512", 00:14:00.420 "dhgroup": "ffdhe8192" 00:14:00.420 } 00:14:00.420 } 00:14:00.420 ]' 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:00.420 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.678 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.678 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.678 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.678 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:14:00.678 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: --dhchap-ctrl-secret DHHC-1:02:YzIyYzU0MDEwMmRiNjZjYjkzNTcyYzIxYjQ1N2YwZmUyZTgyMWI0NzllYzY3MjJhVur3xQ==: 00:14:01.241 16:26:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.497 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.754 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.754 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.754 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.754 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.011 00:14:02.011 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.011 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.011 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.267 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.267 { 00:14:02.267 "cntlid": 141, 00:14:02.267 "qid": 0, 00:14:02.267 "state": "enabled", 00:14:02.267 "thread": "nvmf_tgt_poll_group_000", 00:14:02.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:02.267 "listen_address": { 00:14:02.267 "trtype": "RDMA", 00:14:02.267 "adrfam": "IPv4", 00:14:02.267 "traddr": "192.168.100.8", 00:14:02.268 "trsvcid": "4420" 00:14:02.268 }, 00:14:02.268 "peer_address": { 00:14:02.268 "trtype": "RDMA", 00:14:02.268 "adrfam": "IPv4", 00:14:02.268 "traddr": "192.168.100.8", 00:14:02.268 "trsvcid": "33566" 00:14:02.268 }, 00:14:02.268 "auth": { 00:14:02.268 "state": "completed", 00:14:02.268 "digest": "sha512", 00:14:02.268 "dhgroup": "ffdhe8192" 00:14:02.268 } 00:14:02.268 } 00:14:02.268 ]' 00:14:02.268 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.268 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:02.268 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.268 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:02.268 16:26:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.524 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.524 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.524 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.524 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:14:02.524 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:01:ODg5Mzc4Y2UwODg0NDIzODFkYWE4NGFjZGY2MTI4NjB3bgoI: 00:14:03.089 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:03.345 16:26:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.345 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.601 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.601 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:03.601 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:03.601 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:03.856 00:14:03.856 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.856 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.856 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.111 { 00:14:04.111 "cntlid": 143, 00:14:04.111 "qid": 0, 00:14:04.111 "state": "enabled", 00:14:04.111 "thread": "nvmf_tgt_poll_group_000", 00:14:04.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:04.111 "listen_address": { 00:14:04.111 "trtype": "RDMA", 00:14:04.111 "adrfam": "IPv4", 00:14:04.111 "traddr": "192.168.100.8", 00:14:04.111 "trsvcid": "4420" 00:14:04.111 }, 00:14:04.111 "peer_address": { 00:14:04.111 "trtype": "RDMA", 00:14:04.111 "adrfam": "IPv4", 00:14:04.111 "traddr": "192.168.100.8", 00:14:04.111 "trsvcid": "42895" 00:14:04.111 }, 00:14:04.111 "auth": { 00:14:04.111 "state": "completed", 00:14:04.111 "digest": "sha512", 00:14:04.111 "dhgroup": "ffdhe8192" 00:14:04.111 } 00:14:04.111 } 00:14:04.111 ]' 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.111 16:26:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:04.111 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.367 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.367 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.367 16:26:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.367 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:04.367 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:04.929 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:05.184 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.440 16:26:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.440 16:26:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.696 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.951 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.952 { 00:14:05.952 "cntlid": 145, 00:14:05.952 "qid": 0, 00:14:05.952 "state": "enabled", 00:14:05.952 "thread": "nvmf_tgt_poll_group_000", 00:14:05.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:05.952 "listen_address": { 00:14:05.952 "trtype": "RDMA", 00:14:05.952 "adrfam": "IPv4", 00:14:05.952 "traddr": "192.168.100.8", 00:14:05.952 "trsvcid": "4420" 00:14:05.952 }, 00:14:05.952 
"peer_address": { 00:14:05.952 "trtype": "RDMA", 00:14:05.952 "adrfam": "IPv4", 00:14:05.952 "traddr": "192.168.100.8", 00:14:05.952 "trsvcid": "47824" 00:14:05.952 }, 00:14:05.952 "auth": { 00:14:05.952 "state": "completed", 00:14:05.952 "digest": "sha512", 00:14:05.952 "dhgroup": "ffdhe8192" 00:14:05.952 } 00:14:05.952 } 00:14:05.952 ]' 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.952 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:14:06.208 16:27:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZGM2ZTA0ODhmYTA1OGEyMThkY2VhNGY5YmUyOTNhYjU0NWUzYTc1OWQwYTMzZGZl8liWhw==: --dhchap-ctrl-secret DHHC-1:03:ZjllYWNlMWE2MjRiZTZlMmY0N2ZlNDVmNmZmZGFjYzAwYmFiYjhlODJiMGY5ZTFhODJiYTQ3ZTYyOTg3OTc1MEYFlSA=: 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.137 16:27:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:14:07.137 16:27:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:14:07.394 request:
00:14:07.394 {
00:14:07.394 "name": "nvme0",
00:14:07.394 "trtype": "rdma",
00:14:07.394 "traddr": "192.168.100.8",
00:14:07.394 "adrfam": "ipv4",
00:14:07.394 "trsvcid": "4420",
00:14:07.394 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:14:07.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:14:07.394 "prchk_reftag": false,
00:14:07.394 "prchk_guard": false,
00:14:07.394 "hdgst": false,
00:14:07.394 "ddgst": false,
00:14:07.394 "dhchap_key": "key2",
00:14:07.394 "allow_unrecognized_csi": false,
00:14:07.394 "method": "bdev_nvme_attach_controller",
00:14:07.394 "req_id": 1
00:14:07.394 }
00:14:07.394 Got JSON-RPC error response
00:14:07.394 response:
00:14:07.394 {
00:14:07.394 "code": -5,
00:14:07.394 "message": "Input/output error"
00:14:07.394 }
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:07.394 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:07.957 request: 00:14:07.957 { 00:14:07.957 "name": "nvme0", 00:14:07.957 "trtype": "rdma", 00:14:07.957 "traddr": "192.168.100.8", 00:14:07.957 "adrfam": "ipv4", 00:14:07.957 "trsvcid": "4420", 00:14:07.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:07.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:07.957 "prchk_reftag": false, 00:14:07.957 "prchk_guard": false, 00:14:07.957 "hdgst": false, 00:14:07.957 "ddgst": false, 00:14:07.957 "dhchap_key": "key1", 00:14:07.957 "dhchap_ctrlr_key": "ckey2", 00:14:07.957 "allow_unrecognized_csi": false, 00:14:07.957 "method": "bdev_nvme_attach_controller", 00:14:07.957 "req_id": 1 00:14:07.957 } 00:14:07.957 Got JSON-RPC error response 00:14:07.957 response: 00:14:07.957 { 00:14:07.957 "code": -5, 00:14:07.957 "message": "Input/output error" 00:14:07.957 } 00:14:07.957 16:27:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.957 16:27:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.521 request: 00:14:08.521 { 00:14:08.521 "name": "nvme0", 
00:14:08.521 "trtype": "rdma", 00:14:08.521 "traddr": "192.168.100.8", 00:14:08.521 "adrfam": "ipv4", 00:14:08.521 "trsvcid": "4420", 00:14:08.521 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:08.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:08.521 "prchk_reftag": false, 00:14:08.521 "prchk_guard": false, 00:14:08.521 "hdgst": false, 00:14:08.521 "ddgst": false, 00:14:08.521 "dhchap_key": "key1", 00:14:08.521 "dhchap_ctrlr_key": "ckey1", 00:14:08.521 "allow_unrecognized_csi": false, 00:14:08.521 "method": "bdev_nvme_attach_controller", 00:14:08.521 "req_id": 1 00:14:08.521 } 00:14:08.521 Got JSON-RPC error response 00:14:08.521 response: 00:14:08.521 { 00:14:08.521 "code": -5, 00:14:08.521 "message": "Input/output error" 00:14:08.521 } 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.521 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3751327 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3751327 ']' 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3751327 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3751327 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3751327' 00:14:08.522 killing process with pid 3751327 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3751327 00:14:08.522 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3751327 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:08.779 16:27:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3775940 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3775940 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3775940 ']' 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3775940 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3775940 ']' 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
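For orientation: the restart traced above brings the target back up in pre-init mode (--wait-for-rpc) with nvmf_auth debug logging enabled, so that DH-HMAC-CHAP keys can be registered before the subsystems initialize. A minimal bash sketch of the equivalent manual sequence, reusing the nvmf_tgt and rpc.py paths from the trace (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, and rpc_get_methods is used only as a cheap liveness probe):

    # Start the target in pre-init mode with auth debug logging, as traced above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Block until the target's JSON-RPC server is listening on the UNIX socket.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # Keys are registered next (keyring_file_add_key records below); the batched
    # rpc_cmd traced there then completes startup, presumably via framework_start_init.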
00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.779 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.036 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.036 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:09.036 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:09.036 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.036 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 null0 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XjB 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.b4Y ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b4Y 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vwq 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Bwt ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bwt 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
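The for-loop records here (continuing below) trace target/auth.sh registering each generated secret with the target's keyring before init. Condensed into plain bash, with the key/ckey names and /tmp/spdk.key-* files exactly as they appear in the surrounding records (controller-side ckeys are optional per index, hence the guard):

    # keys[i]  = host DH-HMAC-CHAP secret file, e.g. /tmp/spdk.key-null.XjB
    # ckeys[i] = optional controller-side secret, e.g. /tmp/spdk.key-sha512.b4Y
    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
        if [[ -n ${ckeys[$i]} ]]; then
            rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        fi
    done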
00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wOk 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.70o ]] 00:14:09.293 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.70o 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.9wL 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.294 16:27:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.225 nvme0n1 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.225 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.225 { 00:14:10.225 "cntlid": 1, 00:14:10.225 "qid": 0, 00:14:10.225 "state": "enabled", 00:14:10.225 "thread": "nvmf_tgt_poll_group_000", 00:14:10.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:10.225 "listen_address": { 00:14:10.225 "trtype": "RDMA", 00:14:10.225 "adrfam": "IPv4", 00:14:10.225 "traddr": "192.168.100.8", 00:14:10.225 "trsvcid": "4420" 00:14:10.225 }, 00:14:10.225 "peer_address": { 00:14:10.225 "trtype": "RDMA", 00:14:10.225 "adrfam": "IPv4", 00:14:10.225 "traddr": "192.168.100.8", 00:14:10.225 "trsvcid": "60824" 00:14:10.226 }, 00:14:10.226 "auth": { 00:14:10.226 "state": "completed", 00:14:10.226 "digest": "sha512", 00:14:10.226 "dhgroup": "ffdhe8192" 00:14:10.226 } 00:14:10.226 } 00:14:10.226 ]' 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.226 16:27:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.483 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:10.483 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:11.044 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:11.301 16:27:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.558 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.815 request: 00:14:11.815 { 00:14:11.815 "name": "nvme0", 00:14:11.815 "trtype": "rdma", 00:14:11.815 "traddr": "192.168.100.8", 00:14:11.815 "adrfam": "ipv4", 00:14:11.815 "trsvcid": "4420", 00:14:11.815 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:11.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:11.815 "prchk_reftag": false, 00:14:11.815 "prchk_guard": false, 00:14:11.815 "hdgst": false, 00:14:11.815 "ddgst": false, 00:14:11.815 "dhchap_key": "key3", 00:14:11.815 "allow_unrecognized_csi": false, 00:14:11.815 "method": "bdev_nvme_attach_controller", 00:14:11.815 "req_id": 1 00:14:11.815 } 00:14:11.815 Got JSON-RPC error response 00:14:11.815 response: 00:14:11.815 { 00:14:11.815 "code": -5, 00:14:11.815 "message": "Input/output error" 00:14:11.815 } 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
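The NOT wrappers in the records above encode expected failures: bdev_nvme_set_options first narrows the host's allowed DH-HMAC-CHAP digests to sha256, then (just above) offers only the ffdhe2048 dhgroup, and in each case the key3 attach that succeeded earlier over sha512/ffdhe8192 must no longer negotiate. A condensed sketch of one such assertion, using the hostrpc, bdev_connect, and NOT helpers visible in the trace:

    # Narrow the host's digest offer; the target side still expects the
    # sha512-based exchange that key3 was verified with above.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256
    # NOT inverts the exit status: the step passes only if the attach fails,
    # surfacing as the JSON-RPC error -5 (Input/output error) traced below.
    NOT bdev_connect -b nvme0 --dhchap-key key3

Both restricted attempts produce the same -5 response before a final bdev_nvme_set_options call restores the full digest and dhgroup lists.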
00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.815 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.072 request: 00:14:12.072 { 00:14:12.072 "name": "nvme0", 00:14:12.072 "trtype": "rdma", 00:14:12.072 "traddr": "192.168.100.8", 00:14:12.072 "adrfam": "ipv4", 00:14:12.072 "trsvcid": "4420", 00:14:12.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:12.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:12.072 "prchk_reftag": false, 00:14:12.072 "prchk_guard": false, 00:14:12.072 "hdgst": false, 00:14:12.072 "ddgst": false, 00:14:12.072 "dhchap_key": "key3", 00:14:12.072 "allow_unrecognized_csi": false, 00:14:12.072 "method": "bdev_nvme_attach_controller", 00:14:12.072 "req_id": 1 00:14:12.072 } 00:14:12.072 Got JSON-RPC error response 00:14:12.072 response: 00:14:12.072 { 00:14:12.072 "code": -5, 00:14:12.072 "message": "Input/output error" 00:14:12.072 } 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.072 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.328 16:27:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:12.585 request: 00:14:12.585 { 00:14:12.585 "name": "nvme0", 00:14:12.585 "trtype": "rdma", 00:14:12.585 "traddr": "192.168.100.8", 00:14:12.585 "adrfam": "ipv4", 00:14:12.585 "trsvcid": "4420", 00:14:12.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:12.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:12.585 "prchk_reftag": false, 00:14:12.585 "prchk_guard": false, 00:14:12.585 "hdgst": false, 00:14:12.585 "ddgst": false, 00:14:12.585 "dhchap_key": "key0", 00:14:12.585 "dhchap_ctrlr_key": "key1", 00:14:12.585 "allow_unrecognized_csi": false, 00:14:12.585 "method": "bdev_nvme_attach_controller", 00:14:12.585 "req_id": 1 00:14:12.585 } 00:14:12.585 Got JSON-RPC error response 00:14:12.585 response: 00:14:12.585 { 00:14:12.585 "code": -5, 00:14:12.585 "message": "Input/output error" 00:14:12.585 } 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.585 
16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:12.585 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:12.841 nvme0n1 00:14:12.841 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:12.841 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:12.841 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.097 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.097 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.097 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:13.353 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:13.354 16:27:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:13.959 nvme0n1 00:14:13.959 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:13.959 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:13.959 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.215 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.215 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:14.216 16:27:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: --dhchap-ctrl-secret DHHC-1:03:ZTdiMjg3ODQwNzYzNGM2NWJmODRjMmYwNDg2MjZmZTRlNzc5YzFlZGU5OTYwYjVjMzEwMWZjMDA2YzZmY2UzM8osO2k=: 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:15.144 16:27:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:15.705 request: 00:14:15.705 { 00:14:15.705 "name": "nvme0", 00:14:15.705 "trtype": "rdma", 00:14:15.705 "traddr": "192.168.100.8", 00:14:15.705 "adrfam": "ipv4", 00:14:15.705 "trsvcid": "4420", 00:14:15.705 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:15.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:15.705 "prchk_reftag": false, 00:14:15.705 "prchk_guard": false, 00:14:15.705 "hdgst": false, 00:14:15.705 "ddgst": false, 00:14:15.705 "dhchap_key": "key1", 00:14:15.705 "allow_unrecognized_csi": false, 00:14:15.705 "method": "bdev_nvme_attach_controller", 00:14:15.705 "req_id": 1 00:14:15.705 } 00:14:15.705 Got JSON-RPC error response 00:14:15.705 response: 00:14:15.705 { 00:14:15.705 "code": -5, 00:14:15.705 "message": "Input/output error" 00:14:15.705 } 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:14:15.705 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:16.268 nvme0n1 00:14:16.268 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:16.268 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:16.268 16:27:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:16.525 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:16.783 nvme0n1 00:14:16.783 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:16.783 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:16.783 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.040 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.040 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.040 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: '' 2s 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:17.296 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: ]] 00:14:17.297 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGYzNmQxYzI2ZmYyNjgzYjQ1ZjlhZjgzMzJjNmIyNDa3sjy9: 00:14:17.297 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:17.297 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:17.297 16:27:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.190 
16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: 2s 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: ]] 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2NmYzMwYzcwNTNiZDZjNWVhNjIzNGQ2NDllMjRiNjRhZjlhZjg0MTM3NWY3NDZiqRGJ0w==: 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:19.190 16:27:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:21.711 16:27:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:21.711 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:22.275 nvme0n1 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:22.275 16:27:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:22.531 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:22.531 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:22.531 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:22.788 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:23.045 16:27:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:23.045 16:27:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:23.609 request: 00:14:23.609 { 00:14:23.609 "name": "nvme0", 00:14:23.609 "dhchap_key": "key1", 00:14:23.609 "dhchap_ctrlr_key": "key3", 00:14:23.609 "method": "bdev_nvme_set_keys", 00:14:23.609 "req_id": 1 00:14:23.609 } 00:14:23.609 Got JSON-RPC error response 00:14:23.609 response: 00:14:23.609 { 00:14:23.609 "code": -13, 00:14:23.609 "message": "Permission denied" 00:14:23.609 } 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:23.609 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:23.609 16:27:18 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.865 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:23.865 16:27:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:24.794 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:24.794 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:24.794 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:25.050 16:27:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:25.616 nvme0n1 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:25.616 16:27:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:25.616 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:25.617 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:25.617 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:25.972 request: 00:14:25.972 { 00:14:25.972 "name": "nvme0", 00:14:25.972 "dhchap_key": "key2", 00:14:25.972 "dhchap_ctrlr_key": "key0", 00:14:25.972 "method": "bdev_nvme_set_keys", 00:14:25.972 "req_id": 1 00:14:25.972 } 00:14:25.972 Got JSON-RPC error response 00:14:25.972 response: 00:14:25.972 { 00:14:25.972 "code": -13, 00:14:25.972 "message": "Permission denied" 00:14:25.972 } 00:14:25.972 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:25.972 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:25.973 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:25.973 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:25.973 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:25.973 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:25.973 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.280 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:26.280 16:27:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:27.214 16:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:27.214 16:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:27.214 16:27:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3751511 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@954 -- # '[' -z 3751511 ']' 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3751511 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3751511 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3751511' 00:14:27.472 killing process with pid 3751511 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3751511 00:14:27.472 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3751511 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:27.730 rmmod nvme_rdma 00:14:27.730 rmmod nvme_fabrics 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3775940 ']' 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3775940 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3775940 ']' 00:14:27.730 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3775940 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775940 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775940' 00:14:27.988 killing process with pid 3775940 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3775940 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3775940 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XjB /tmp/spdk.key-sha256.vwq /tmp/spdk.key-sha384.wOk /tmp/spdk.key-sha512.9wL /tmp/spdk.key-sha512.b4Y /tmp/spdk.key-sha384.Bwt /tmp/spdk.key-sha256.70o '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:14:27.988 00:14:27.988 real 2m33.335s 00:14:27.988 user 5m53.773s 00:14:27.988 sys 0m20.021s 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.988 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.988 ************************************ 00:14:27.988 END TEST nvmf_auth_target 00:14:27.988 ************************************ 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.246 ************************************ 00:14:28.246 START TEST nvmf_srq_overwhelm 00:14:28.246 ************************************ 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:14:28.246 * Looking for test storage... 
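The shutdown above follows the killprocess pattern traced from common/autotest_common.sh: validate the pid argument, probe the process with kill -0, read its comm name with ps so a sudo wrapper is never signalled directly, then kill and wait to reap it. A hedged, simplified reconstruction (the in-tree helper has more branches than this, e.g. for the sudo case):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid given
        kill -0 "$pid" || return 0           # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # placeholder: the real helper treats sudo specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap and propagate the exit status
    }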
00:14:28.246 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lcov --version 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.246 --rc genhtml_branch_coverage=1 00:14:28.246 --rc genhtml_function_coverage=1 00:14:28.246 --rc genhtml_legend=1 00:14:28.246 --rc geninfo_all_blocks=1 00:14:28.246 --rc geninfo_unexecuted_blocks=1 00:14:28.246 00:14:28.246 ' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.246 --rc genhtml_branch_coverage=1 00:14:28.246 --rc genhtml_function_coverage=1 00:14:28.246 --rc genhtml_legend=1 00:14:28.246 --rc geninfo_all_blocks=1 00:14:28.246 --rc geninfo_unexecuted_blocks=1 00:14:28.246 00:14:28.246 ' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.246 --rc genhtml_branch_coverage=1 00:14:28.246 --rc genhtml_function_coverage=1 00:14:28.246 --rc genhtml_legend=1 00:14:28.246 --rc geninfo_all_blocks=1 00:14:28.246 --rc geninfo_unexecuted_blocks=1 00:14:28.246 00:14:28.246 ' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.246 --rc genhtml_branch_coverage=1 00:14:28.246 --rc genhtml_function_coverage=1 00:14:28.246 --rc genhtml_legend=1 00:14:28.246 --rc geninfo_all_blocks=1 00:14:28.246 --rc geninfo_unexecuted_blocks=1 00:14:28.246 00:14:28.246 ' 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:28.246 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
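The lt 1.15 2 exchange a few entries up is scripts/common.sh comparing version strings component by component: both strings are split on '.', '-' and ':', the longer array sets the loop bound, and the first unequal component decides the result. A condensed sketch of that logic (the in-tree version also routes each component through a decimal helper to normalize non-numeric parts):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:                        # split on the same separators as the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == *'>'* ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == *'<'* ]]; return; fi
        done
        [[ $op == *'='* ]]                   # all components equal
    }
    lt 1.15 2 && echo "1.15 predates 2.x"    # the branch taken in the trace above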
00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:28.247 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.247 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.504 16:27:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:33.757 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:33.758 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:33.758 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:33.758 Found net devices under 0000:18:00.0: mlx_0_0 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:33.758 Found net devices under 0000:18:00.1: mlx_0_1 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
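Device discovery above maps each matching Mellanox PCI function (0x15b3/0x1015) to its kernel netdev by globbing sysfs and stripping the path prefix, rather than parsing tool output. A minimal sketch of that mapping, using the exact expansions from the trace:

    # Each PCI function lists its netdevs under /sys/bus/pci/devices/<BDF>/net/.
    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
    # On this testbed: mlx_0_0 under 0000:18:00.0, mlx_0_1 under 0000:18:00.1.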
00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:33.758 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.015 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:34.016 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.016 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:34.016 altname enp24s0f0np0 00:14:34.016 altname ens785f0np0 00:14:34.016 inet 192.168.100.8/24 scope global mlx_0_0 00:14:34.016 valid_lft forever preferred_lft forever 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:34.016 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.016 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:34.016 altname enp24s0f1np1 00:14:34.016 altname ens785f1np1 00:14:34.016 inet 192.168.100.9/24 scope global mlx_0_1 00:14:34.016 valid_lft forever preferred_lft forever 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:34.016 192.168.100.9' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:34.016 192.168.100.9' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:34.016 192.168.100.9' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3783347 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3783347 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3783347 ']' 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
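The address bookkeeping just above reduces ip(8) output to bare IPv4 addresses: ip -o -4 prints one line per address, awk takes the ADDR/PREFIX field, and cut drops the prefix length; head and tail then split the resulting list into the first and second target IPs. A sketch of the same derivation with the interface names from this run:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9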
00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.016 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:34.016 [2024-12-06 16:27:28.690605] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:14:34.016 [2024-12-06 16:27:28.690648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.274 [2024-12-06 16:27:28.749055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.274 [2024-12-06 16:27:28.789106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.274 [2024-12-06 16:27:28.789144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.274 [2024-12-06 16:27:28.789150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.274 [2024-12-06 16:27:28.789155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.274 [2024-12-06 16:27:28.789160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.274 [2024-12-06 16:27:28.790381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.274 [2024-12-06 16:27:28.790400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.274 [2024-12-06 16:27:28.790487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.274 [2024-12-06 16:27:28.790489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:34.274 [2024-12-06 16:27:28.945037] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d200c0/0x1d245b0) succeed. 00:14:34.274 [2024-12-06 16:27:28.953199] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d21750/0x1d65c50) succeed. 
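With both IB devices registered, the target side is in place: nvmf_tgt runs with an all-reactor core mask and the RDMA transport is created with deliberately small buffer counts so the shared receive queue can actually be overwhelmed. A condensed sketch of that bring-up; the flag semantics are inferred from the trace and the test's name, not restated from documentation:

    # Launch the target on cores 0-3 with full tracepoints, as in the trace.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest_common.sh helper; waits on /var/tmp/spdk.sock
    # RDMA transport with 1024 shared buffers; -u 8192 and -s 1024 as logged above.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024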
00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.274 16:27:28 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5
00:14:34.532 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:34.532 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000
00:14:34.532 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.532 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:34.533 Malloc0
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:34.533 [2024-12-06 16:27:29.051871] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.533 16:27:29 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:35.469 Malloc1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.469 16:27:30 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme1n1
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:36.404 Malloc2
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.404 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:36.663 16:27:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:37.598 Malloc3
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:37.598 16:27:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
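The waitforblk calls traced above (the @1239-@1250 lines from common/autotest_common.sh) simply poll lsblk until the freshly connected namespace shows up as a block device. A minimal sketch of such a polling helper follows; the retry bound and sleep interval are assumptions for illustration, not the framework's exact values:

waitforblk() {
    # Poll lsblk until the named block device (e.g. nvme0n1) appears.
    local i=0
    while ! lsblk -l -o NAME | grep -q -w "$1"; do
        ((++i > 15)) && return 1  # assumed retry bound
        sleep 1
    done
    return 0
}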
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:38.534 Malloc4
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:38.534 16:27:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5)
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:39.911 Malloc5
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:39.911 16:27:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0
00:14:40.847 16:27:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
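At this point all six subsystems are up, each built by the same seven xtrace'd steps from srq_overwhelm.sh lines 22-28. Collapsed back into script form, the loop the trace corresponds to looks like the sketch below; rpc_cmd and waitforblk are the test-framework helpers seen in the trace, and every value is copied from the log:

for i in $(seq 0 5); do
    # Target side: subsystem cnode$i backed by a 64 MiB, 512-byte-block malloc bdev,
    # exposed over RDMA on 192.168.100.8:4420.
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    # Host side: connect and wait for /dev/nvme${i}n1 to appear.
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562 \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    waitforblk nvme${i}n1
done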
00:14:40.847 [global]
00:14:40.847 thread=1
00:14:40.847 invalidate=1
00:14:40.847 rw=read
00:14:40.847 time_based=1
00:14:40.847 runtime=10
00:14:40.847 ioengine=libaio
00:14:40.847 direct=1
00:14:40.847 bs=1048576
00:14:40.847 iodepth=128
00:14:40.847 norandommap=1
00:14:40.847 numjobs=13
00:14:40.847
00:14:40.847 [job0]
00:14:40.847 filename=/dev/nvme0n1
00:14:40.847 [job1]
00:14:40.847 filename=/dev/nvme1n1
00:14:40.847 [job2]
00:14:40.847 filename=/dev/nvme2n1
00:14:40.847 [job3]
00:14:40.847 filename=/dev/nvme3n1
00:14:40.847 [job4]
00:14:40.847 filename=/dev/nvme4n1
00:14:40.847 [job5]
00:14:40.847 filename=/dev/nvme5n1
00:14:40.847 Could not set queue depth (nvme0n1)
00:14:40.847 Could not set queue depth (nvme1n1)
00:14:40.847 Could not set queue depth (nvme2n1)
00:14:40.847 Could not set queue depth (nvme3n1)
00:14:40.847 Could not set queue depth (nvme4n1)
00:14:40.847 Could not set queue depth (nvme5n1)
00:14:41.104 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
00:14:41.104 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
00:14:41.104 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
00:14:41.104 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
00:14:41.104 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
00:14:41.104 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:14:41.104 ...
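The [global] section above is the expansion of the fio-wrapper flags from the @36 line (-t read maps to rw=read, -r 10 to runtime=10, -i 1048576 to bs, -d 128 to iodepth, -n 13 to numjobs), and with six job sections at numjobs=13 fio spawns the 78 threads reported next. A sketch of regenerating the same job file by hand; the srq.fio name is an assumption:

{
    printf '[global]\nthread=1\ninvalidate=1\nrw=read\ntime_based=1\nruntime=10\n'
    printf 'ioengine=libaio\ndirect=1\nbs=1048576\niodepth=128\nnorandommap=1\nnumjobs=13\n'
    # One job section per connected namespace, /dev/nvme0n1 through /dev/nvme5n1.
    for i in $(seq 0 5); do
        printf '\n[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i"
    done
} > srq.fio
fio srq.fio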
00:14:41.104 fio-3.35
00:14:41.104 Starting 78 threads
00:14:55.992
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784837: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=4, BW=4632KiB/s (4743kB/s)(57.0MiB/12602msec)
00:14:55.992 slat (usec): min=773, max=2146.9k, avg=184399.19, stdev=577179.52
00:14:55.992 clat (msec): min=2090, max=12600, avg=11438.79, stdev=2268.07
00:14:55.992 lat (msec): min=4212, max=12601, avg=11623.19, stdev=1890.25
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 2089], 5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[10671],
00:14:55.992 | 30.00th=[12416], 40.00th=[12416], 50.00th=[12550], 60.00th=[12550],
00:14:55.992 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550],
00:14:55.992 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:14:55.992 | 99.99th=[12550]
00:14:55.992 lat (msec) : >=2000=100.00%
00:14:55.992 cpu : usr=0.00%, sys=0.52%, ctx=94, majf=0, minf=14593
00:14:55.992 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:14:55.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.992 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.992 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.992 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784838: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=1, BW=1972KiB/s (2019kB/s)(20.0MiB/10387msec)
00:14:55.992 slat (usec): min=1674, max=2122.1k, avg=516825.73, stdev=890953.13
00:14:55.992 clat (msec): min=50, max=10384, avg=6330.60, stdev=3429.52
00:14:55.992 lat (msec): min=2106, max=10386, avg=6847.43, stdev=3204.72
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 51], 5.00th=[ 51], 10.00th=[ 2106], 20.00th=[ 2140],
00:14:55.992 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8557],
00:14:55.992 | 70.00th=[ 8557], 80.00th=[10268], 90.00th=[10402], 95.00th=[10402],
00:14:55.992 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.992 | 99.99th=[10402]
00:14:55.992 lat (msec) : 100=5.00%, >=2000=95.00%
00:14:55.992 cpu : usr=0.00%, sys=0.15%, ctx=57, majf=0, minf=5121
00:14:55.992 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:14:55.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.992 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.992 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.992 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784839: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=53, BW=54.0MiB/s (56.6MB/s)(563MiB/10435msec)
00:14:55.992 slat (usec): min=69, max=2094.3k, avg=18423.28, stdev=150823.12
00:14:55.992 clat (msec): min=59, max=6936, avg=2226.42, stdev=2339.70
00:14:55.992 lat (msec): min=571, max=6940, avg=2244.85, stdev=2343.13
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 584], 5.00th=[ 684], 10.00th=[ 768], 20.00th=[ 860],
00:14:55.992 | 30.00th=[ 894], 40.00th=[ 919], 50.00th=[ 986], 60.00th=[ 1133],
00:14:55.992 | 70.00th=[ 1217], 80.00th=[ 6477], 90.00th=[ 6678], 95.00th=[ 6745],
00:14:55.992 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946],
00:14:55.992 | 99.99th=[ 6946]
00:14:55.992 bw ( KiB/s): min= 2048, max=233472, per=2.96%, avg=98986.67, stdev=81438.59, samples=9
00:14:55.992 iops : min= 2, max= 228, avg=96.67, stdev=79.53, samples=9
00:14:55.992 lat (msec) : 100=0.18%, 750=8.88%, 1000=42.27%, 2000=24.33%, >=2000=24.33%
00:14:55.992 cpu : usr=0.00%, sys=0.98%, ctx=1280, majf=0, minf=32769
00:14:55.992 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8%
00:14:55.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.992 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:14:55.992 issued rwts: total=563,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.992 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784840: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=14, BW=15.0MiB/s (15.7MB/s)(157MiB/10499msec)
00:14:55.992 slat (usec): min=1038, max=2095.3k, avg=66488.40, stdev=322457.15
00:14:55.992 clat (msec): min=59, max=10328, avg=7076.34, stdev=2127.91
00:14:55.992 lat (msec): min=2099, max=10352, avg=7142.83, stdev=2067.72
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 2106], 5.00th=[ 2735], 10.00th=[ 2869], 20.00th=[ 6141],
00:14:55.992 | 30.00th=[ 7684], 40.00th=[ 7819], 50.00th=[ 7953], 60.00th=[ 8087],
00:14:55.992 | 70.00th=[ 8221], 80.00th=[ 8288], 90.00th=[ 8423], 95.00th=[ 8557],
00:14:55.992 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:14:55.992 | 99.99th=[10268]
00:14:55.992 bw ( KiB/s): min= 4096, max=36864, per=0.44%, avg=14848.00, stdev=14897.94, samples=4
00:14:55.992 iops : min= 4, max= 36, avg=14.50, stdev=14.55, samples=4
00:14:55.992 lat (msec) : 100=0.64%, >=2000=99.36%
00:14:55.992 cpu : usr=0.00%, sys=0.93%, ctx=443, majf=0, minf=32769
00:14:55.992 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=5.1%, 16=10.2%, 32=20.4%, >=64=59.9%
00:14:55.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.992 complete : 0=0.0%, 4=96.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.2%
00:14:55.992 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.992 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784841: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=12, BW=12.1MiB/s (12.7MB/s)(151MiB/12488msec)
00:14:55.992 slat (usec): min=1631, max=2146.9k, avg=68883.29, stdev=340526.04
00:14:55.992 clat (msec): min=1403, max=12028, avg=9750.24, stdev=3402.21
00:14:55.992 lat (msec): min=1416, max=12039, avg=9819.12, stdev=3342.49
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 1401], 5.00th=[ 1469], 10.00th=[ 1569], 20.00th=[ 8557],
00:14:55.992 | 30.00th=[10939], 40.00th=[11073], 50.00th=[11208], 60.00th=[11342],
00:14:55.992 | 70.00th=[11476], 80.00th=[11610], 90.00th=[11879], 95.00th=[11879],
00:14:55.992 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013],
00:14:55.992 | 99.99th=[12013]
00:14:55.992 bw ( KiB/s): min= 1582, max=32768, per=0.24%, avg=8114.33, stdev=12335.98, samples=6
00:14:55.992 iops : min= 1, max= 32, avg= 7.83, stdev=12.11, samples=6
00:14:55.992 lat (msec) : 2000=10.60%, >=2000=89.40%
00:14:55.992 cpu : usr=0.01%, sys=0.53%, ctx=570, majf=0, minf=32769
00:14:55.992 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=5.3%, 16=10.6%, 32=21.2%, >=64=58.3%
00:14:55.992 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.992 complete : 0=0.0%, 4=96.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.0%
00:14:55.992 issued rwts: total=151,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.992 latency : target=0, window=0, percentile=100.00%, depth=128
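Every job group below repeats the shape just shown: a read summary line (IOPS/BW/bytes/runtime), slat/clat/lat statistics with clat percentiles, a per-group bw and iops summary, then CPU, IO-depth, submit/complete distributions, and issued-I/O totals. For scanning the many groups at once, a filter over a saved copy of this console output can help; illustrative only, with console.log as an assumed file name:

grep 'read: IOPS=' console.log | sed 's/.*read: //'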
00:14:55.992 job0: (groupid=0, jobs=1): err= 0: pid=3784842: Fri Dec 6 16:27:48 2024
00:14:55.992 read: IOPS=85, BW=85.7MiB/s (89.9MB/s)(902MiB/10526msec)
00:14:55.992 slat (usec): min=50, max=2032.7k, avg=11607.68, stdev=85350.36
00:14:55.992 clat (msec): min=50, max=4707, avg=1415.90, stdev=1097.84
00:14:55.992 lat (msec): min=616, max=4709, avg=1427.50, stdev=1100.70
00:14:55.992 clat percentiles (msec):
00:14:55.992 | 1.00th=[ 617], 5.00th=[ 642], 10.00th=[ 642], 20.00th=[ 693],
00:14:55.992 | 30.00th=[ 751], 40.00th=[ 818], 50.00th=[ 919], 60.00th=[ 1099],
00:14:55.992 | 70.00th=[ 1536], 80.00th=[ 1687], 90.00th=[ 3004], 95.00th=[ 4463],
00:14:55.993 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732],
00:14:55.993 | 99.99th=[ 4732]
00:14:55.993 bw ( KiB/s): min=36864, max=208896, per=3.65%, avg=121955.54, stdev=58451.63, samples=13
00:14:55.993 iops : min= 36, max= 204, avg=119.08, stdev=57.08, samples=13
00:14:55.993 lat (msec) : 100=0.11%, 750=30.04%, 1000=28.49%, 2000=24.83%, >=2000=16.52%
00:14:55.993 cpu : usr=0.01%, sys=1.98%, ctx=1059, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.993 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784843: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=81, BW=81.4MiB/s (85.3MB/s)(1014MiB/12461msec)
00:14:55.993 slat (usec): min=30, max=2096.2k, avg=10224.47, stdev=93327.45
00:14:55.993 clat (msec): min=392, max=6985, avg=1493.42, stdev=1949.95
00:14:55.993 lat (msec): min=395, max=6986, avg=1503.64, stdev=1955.54
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 401], 5.00th=[ 477], 10.00th=[ 542], 20.00th=[ 575],
00:14:55.993 | 30.00th=[ 651], 40.00th=[ 735], 50.00th=[ 802], 60.00th=[ 852],
00:14:55.993 | 70.00th=[ 911], 80.00th=[ 1070], 90.00th=[ 6544], 95.00th=[ 6745],
00:14:55.993 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 7013],
00:14:55.993 | 99.99th=[ 7013]
00:14:55.993 bw ( KiB/s): min= 1582, max=262144, per=4.18%, avg=139656.62, stdev=88483.78, samples=13
00:14:55.993 iops : min= 1, max= 256, avg=136.15, stdev=86.60, samples=13
00:14:55.993 lat (msec) : 500=5.42%, 750=36.88%, 1000=35.21%, 2000=9.47%, >=2000=13.02%
00:14:55.993 cpu : usr=0.01%, sys=0.95%, ctx=1904, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.993 issued rwts: total=1014,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784844: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=3, BW=3619KiB/s (3706kB/s)(37.0MiB/10469msec)
00:14:55.993 slat (usec): min=869, max=2131.1k, avg=281458.76, stdev=700121.89
00:14:55.993 clat (msec): min=54, max=10465, avg=8217.90, stdev=3161.61
00:14:55.993 lat (msec): min=2113, max=10468, avg=8499.36, stdev=2864.20
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 55], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4279],
00:14:55.993 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10402], 60.00th=[10402],
00:14:55.993 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:14:55.993 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.993 | 99.99th=[10402]
00:14:55.993 lat (msec) : 100=2.70%, >=2000=97.30%
00:14:55.993 cpu : usr=0.00%, sys=0.39%, ctx=62, majf=0, minf=9473
00:14:55.993 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.993 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784846: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=107, BW=107MiB/s (112MB/s)(1125MiB/10509msec)
00:14:55.993 slat (usec): min=32, max=2063.9k, avg=9292.93, stdev=86927.57
00:14:55.993 clat (msec): min=50, max=4919, avg=1147.66, stdev=1208.31
00:14:55.993 lat (msec): min=307, max=4921, avg=1156.95, stdev=1211.41
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 309], 5.00th=[ 481], 10.00th=[ 542], 20.00th=[ 617],
00:14:55.993 | 30.00th=[ 659], 40.00th=[ 709], 50.00th=[ 726], 60.00th=[ 776],
00:14:55.993 | 70.00th=[ 852], 80.00th=[ 936], 90.00th=[ 4178], 95.00th=[ 4530],
00:14:55.993 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933],
00:14:55.993 | 99.99th=[ 4933]
00:14:55.993 bw ( KiB/s): min= 4096, max=317440, per=4.70%, avg=157065.85, stdev=74695.33, samples=13
00:14:55.993 iops : min= 4, max= 310, avg=153.38, stdev=72.94, samples=13
00:14:55.993 lat (msec) : 100=0.09%, 500=5.60%, 750=49.33%, 1000=32.98%, 2000=0.53%
00:14:55.993 lat (msec) : >=2000=11.47%
00:14:55.993 cpu : usr=0.00%, sys=1.59%, ctx=1245, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.993 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784847: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=150, BW=151MiB/s (158MB/s)(1877MiB/12468msec)
00:14:55.993 slat (usec): min=39, max=2108.1k, avg=5519.46, stdev=68662.32
00:14:55.993 clat (msec): min=247, max=6886, avg=823.21, stdev=1572.73
00:14:55.993 lat (msec): min=248, max=6888, avg=828.73, stdev=1577.97
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 268],
00:14:55.993 | 30.00th=[ 300], 40.00th=[ 342], 50.00th=[ 376], 60.00th=[ 401],
00:14:55.993 | 70.00th=[ 451], 80.00th=[ 518], 90.00th=[ 860], 95.00th=[ 6544],
00:14:55.993 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879],
00:14:55.993 | 99.99th=[ 6879]
00:14:55.993 bw ( KiB/s): min= 1582, max=466011, per=7.66%, avg=255899.43, stdev=166407.35, samples=14
00:14:55.993 iops : min= 1, max= 455, avg=249.79, stdev=162.68, samples=14
00:14:55.993 lat (msec) : 250=2.61%, 500=73.26%, 750=11.51%, 1000=5.70%, >=2000=6.93%
00:14:55.993 cpu : usr=0.04%, sys=1.33%, ctx=2241, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.993 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784848: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=16, BW=16.2MiB/s (17.0MB/s)(202MiB/12481msec)
00:14:55.993 slat (usec): min=788, max=2172.5k, avg=51454.22, stdev=272261.45
00:14:55.993 clat (msec): min=1670, max=12446, avg=7297.46, stdev=4180.12
00:14:55.993 lat (msec): min=1684, max=12480, avg=7348.92, stdev=4168.04
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 1687], 5.00th=[ 1703], 10.00th=[ 1720], 20.00th=[ 1737],
00:14:55.993 | 30.00th=[ 1754], 40.00th=[ 7282], 50.00th=[10000], 60.00th=[10268],
00:14:55.993 | 70.00th=[10537], 80.00th=[10939], 90.00th=[11208], 95.00th=[11342],
00:14:55.993 | 99.00th=[11476], 99.50th=[11476], 99.90th=[12416], 99.95th=[12416],
00:14:55.993 | 99.99th=[12416]
00:14:55.993 bw ( KiB/s): min= 1582, max=79712, per=0.65%, avg=21853.43, stdev=28148.82, samples=7
00:14:55.993 iops : min= 1, max= 77, avg=21.14, stdev=27.27, samples=7
00:14:55.993 lat (msec) : 2000=32.67%, >=2000=67.33%
00:14:55.993 cpu : usr=0.02%, sys=0.95%, ctx=391, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=7.9%, 32=15.8%, >=64=68.8%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3%
00:14:55.993 issued rwts: total=202,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784849: Fri Dec 6 16:27:48 2024
00:14:55.993 read: IOPS=61, BW=61.7MiB/s (64.7MB/s)(640MiB/10378msec)
00:14:55.993 slat (usec): min=50, max=2051.9k, avg=16125.88, stdev=138692.18
00:14:55.993 clat (msec): min=54, max=6894, avg=1994.96, stdev=2094.42
00:14:55.993 lat (msec): min=505, max=6905, avg=2011.08, stdev=2099.48
00:14:55.993 clat percentiles (msec):
00:14:55.993 | 1.00th=[ 527], 5.00th=[ 542], 10.00th=[ 659], 20.00th=[ 693],
00:14:55.993 | 30.00th=[ 776], 40.00th=[ 927], 50.00th=[ 1028], 60.00th=[ 1116],
00:14:55.993 | 70.00th=[ 1167], 80.00th=[ 2802], 90.00th=[ 6544], 95.00th=[ 6678],
00:14:55.993 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879],
00:14:55.993 | 99.99th=[ 6879]
00:14:55.993 bw ( KiB/s): min=10219, max=194560, per=3.14%, avg=104839.50, stdev=63566.71, samples=10
00:14:55.993 iops : min= 9, max= 190, avg=102.20, stdev=62.28, samples=10
00:14:55.993 lat (msec) : 100=0.16%, 750=27.34%, 1000=19.69%, 2000=27.50%, >=2000=25.31%
00:14:55.993 cpu : usr=0.01%, sys=1.19%, ctx=1291, majf=0, minf=32769
00:14:55.993 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:14:55.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.993 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:14:55.993 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.993 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.993 job0: (groupid=0, jobs=1): err= 0: pid=3784850: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=21, BW=21.3MiB/s (22.3MB/s)(223MiB/10479msec)
00:14:55.994 slat (usec): min=91, max=2126.6k, avg=44844.10, stdev=265646.07
00:14:55.994 clat (msec): min=477, max=9391, avg=1758.05, stdev=2398.08
00:14:55.994 lat (msec): min=477, max=9394, avg=1802.90, stdev=2450.73
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 485], 5.00th=[ 535], 10.00th=[ 609], 20.00th=[ 785],
00:14:55.994 | 30.00th=[ 869], 40.00th=[ 877], 50.00th=[ 911], 60.00th=[ 1011],
00:14:55.994 | 70.00th=[ 1116], 80.00th=[ 1234], 90.00th=[ 5537], 95.00th=[ 9329],
00:14:55.994 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329],
00:14:55.994 | 99.99th=[ 9329]
00:14:55.994 bw ( KiB/s): min=45056, max=126083, per=2.56%, avg=85569.50, stdev=57294.74, samples=2
00:14:55.994 iops : min= 44, max= 123, avg=83.50, stdev=55.86, samples=2
00:14:55.994 lat (msec) : 500=3.59%, 750=15.70%, 1000=39.01%, 2000=29.15%, >=2000=12.56%
00:14:55.994 cpu : usr=0.01%, sys=1.08%, ctx=268, majf=0, minf=32769
00:14:55.994 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.3%, >=64=71.7%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0%
00:14:55.994 issued rwts: total=223,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784871: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=2, BW=2461KiB/s (2520kB/s)(25.0MiB/10402msec)
00:14:55.994 slat (usec): min=730, max=2129.9k, avg=413689.01, stdev=821993.18
00:14:55.994 clat (msec): min=59, max=10400, avg=7799.52, stdev=3266.46
00:14:55.994 lat (msec): min=2112, max=10401, avg=8213.21, stdev=2877.04
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 60], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4279],
00:14:55.994 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[10268], 60.00th=[10268],
00:14:55.994 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:14:55.994 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.994 | 99.99th=[10402]
00:14:55.994 lat (msec) : 100=4.00%, >=2000=96.00%
00:14:55.994 cpu : usr=0.00%, sys=0.19%, ctx=68, majf=0, minf=6401
00:14:55.994 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.994 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784872: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=3, BW=3263KiB/s (3342kB/s)(40.0MiB/12551msec)
00:14:55.994 slat (usec): min=562, max=2124.5k, avg=261026.94, stdev=677041.98
00:14:55.994 clat (msec): min=2109, max=12547, avg=10868.23, stdev=2954.02
00:14:55.994 lat (msec): min=4204, max=12550, avg=11129.26, stdev=2600.35
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490],
00:14:55.994 | 30.00th=[10671], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550],
00:14:55.994 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550],
00:14:55.994 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:14:55.994 | 99.99th=[12550]
00:14:55.994 lat (msec) : >=2000=100.00%
00:14:55.994 cpu : usr=0.00%, sys=0.40%, ctx=71, majf=0, minf=10241
00:14:55.994 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.994 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784873: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=7, BW=8037KiB/s (8230kB/s)(99.0MiB/12613msec)
00:14:55.994 slat (usec): min=464, max=2094.8k, avg=106073.27, stdev=440701.49
00:14:55.994 clat (msec): min=2110, max=12609, avg=10634.09, stdev=2879.24
00:14:55.994 lat (msec): min=4179, max=12612, avg=10740.17, stdev=2752.69
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490],
00:14:55.994 | 30.00th=[10671], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550],
00:14:55.994 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550],
00:14:55.994 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550],
00:14:55.994 | 99.99th=[12550]
00:14:55.994 lat (msec) : >=2000=100.00%
00:14:55.994 cpu : usr=0.01%, sys=0.81%, ctx=93, majf=0, minf=25345
00:14:55.994 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:14:55.994 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784874: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=92, BW=92.7MiB/s (97.2MB/s)(938MiB/10118msec)
00:14:55.994 slat (usec): min=45, max=2004.6k, avg=10670.10, stdev=86456.70
00:14:55.994 clat (msec): min=106, max=7460, avg=1203.37, stdev=989.92
00:14:55.994 lat (msec): min=145, max=7476, avg=1214.04, stdev=997.55
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 188], 5.00th=[ 558], 10.00th=[ 600], 20.00th=[ 617],
00:14:55.994 | 30.00th=[ 642], 40.00th=[ 709], 50.00th=[ 743], 60.00th=[ 835],
00:14:55.994 | 70.00th=[ 936], 80.00th=[ 1519], 90.00th=[ 2970], 95.00th=[ 3171],
00:14:55.994 | 99.00th=[ 3406], 99.50th=[ 4732], 99.90th=[ 7483], 99.95th=[ 7483],
00:14:55.994 | 99.99th=[ 7483]
00:14:55.994 bw ( KiB/s): min=18432, max=212992, per=3.82%, avg=127606.15, stdev=62625.42, samples=13
00:14:55.994 iops : min= 18, max= 208, avg=124.62, stdev=61.16, samples=13
00:14:55.994 lat (msec) : 250=1.28%, 500=2.67%, 750=47.01%, 1000=21.32%, 2000=7.78%
00:14:55.994 lat (msec) : >=2000=19.94%
00:14:55.994 cpu : usr=0.00%, sys=1.61%, ctx=974, majf=0, minf=32331
00:14:55.994 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.994 issued rwts: total=938,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784875: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=3, BW=3263KiB/s (3341kB/s)(33.0MiB/10357msec)
00:14:55.994 slat (usec): min=594, max=2104.5k, avg=312065.44, stdev=728408.37
00:14:55.994 clat (msec): min=58, max=10347, avg=4990.99, stdev=3150.70
00:14:55.994 lat (msec): min=2086, max=10356, avg=5303.05, stdev=3156.88
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 59], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2106],
00:14:55.994 | 30.00th=[ 2140], 40.00th=[ 4245], 50.00th=[ 4245], 60.00th=[ 4329],
00:14:55.994 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10268], 95.00th=[10268],
00:14:55.994 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.994 | 99.99th=[10402]
00:14:55.994 lat (msec) : 100=3.03%, >=2000=96.97%
00:14:55.994 cpu : usr=0.00%, sys=0.19%, ctx=50, majf=0, minf=8449
00:14:55.994 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.994 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784877: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=3, BW=3376KiB/s (3457kB/s)(41.0MiB/12437msec)
00:14:55.994 slat (usec): min=401, max=2102.7k, avg=251872.19, stdev=660810.39
00:14:55.994 clat (msec): min=2109, max=12388, avg=9918.61, stdev=2959.72
00:14:55.994 lat (msec): min=4193, max=12436, avg=10170.48, stdev=2707.17
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409],
00:14:55.994 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12281],
00:14:55.994 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416],
00:14:55.994 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416],
00:14:55.994 | 99.99th=[12416]
00:14:55.994 lat (msec) : >=2000=100.00%
00:14:55.994 cpu : usr=0.00%, sys=0.18%, ctx=47, majf=0, minf=10497
00:14:55.994 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.994 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784878: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=9, BW=9.87MiB/s (10.4MB/s)(104MiB/10532msec)
00:14:55.994 slat (usec): min=496, max=2113.5k, avg=100691.45, stdev=429679.39
00:14:55.994 clat (msec): min=59, max=10530, avg=8574.94, stdev=2797.22
00:14:55.994 lat (msec): min=2103, max=10531, avg=8675.63, stdev=2673.46
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 2106], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 6409],
00:14:55.994 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10402], 60.00th=[10402],
00:14:55.994 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537],
00:14:55.994 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:14:55.994 | 99.99th=[10537]
00:14:55.994 lat (msec) : 100=0.96%, >=2000=99.04%
00:14:55.994 cpu : usr=0.00%, sys=0.97%, ctx=98, majf=0, minf=26625
00:14:55.994 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.7%, 16=15.4%, 32=30.8%, >=64=39.4%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:14:55.994 issued rwts: total=104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784879: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=2, BW=2357KiB/s (2413kB/s)(24.0MiB/10429msec)
00:14:55.994 slat (usec): min=1475, max=2150.9k, avg=432141.22, stdev=833501.80
00:14:55.994 clat (msec): min=56, max=10426, avg=8324.34, stdev=2792.14
00:14:55.994 lat (msec): min=2141, max=10428, avg=8756.48, stdev=2195.82
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 57], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 6409],
00:14:55.994 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8658], 60.00th=[10268],
00:14:55.994 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:14:55.994 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.994 | 99.99th=[10402]
00:14:55.994 lat (msec) : 100=4.17%, >=2000=95.83%
00:14:55.994 cpu : usr=0.00%, sys=0.18%, ctx=66, majf=0, minf=6145
00:14:55.994 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0%
00:14:55.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.994 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.994 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.994 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.994 job1: (groupid=0, jobs=1): err= 0: pid=3784880: Fri Dec 6 16:27:48 2024
00:14:55.994 read: IOPS=2, BW=2870KiB/s (2939kB/s)(29.0MiB/10347msec)
00:14:55.994 slat (usec): min=600, max=2142.3k, avg=354626.31, stdev=770321.85
00:14:55.994 clat (msec): min=62, max=10332, avg=6510.59, stdev=3117.21
00:14:55.994 lat (msec): min=2092, max=10346, avg=6865.22, stdev=2937.18
00:14:55.994 clat percentiles (msec):
00:14:55.994 | 1.00th=[ 63], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2123],
00:14:55.994 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[ 8557],
00:14:55.994 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268],
00:14:55.995 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:14:55.995 | 99.99th=[10268]
00:14:55.995 lat (msec) : 100=3.45%, >=2000=96.55%
00:14:55.995 cpu : usr=0.02%, sys=0.17%, ctx=55, majf=0, minf=7425
00:14:55.995 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job1: (groupid=0, jobs=1): err= 0: pid=3784881: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=6, BW=6470KiB/s (6626kB/s)(66.0MiB/10445msec)
00:14:55.995 slat (usec): min=542, max=2100.9k, avg=157299.45, stdev=530254.55
00:14:55.995 clat (msec): min=62, max=10441, avg=7828.68, stdev=2914.94
00:14:55.995 lat (msec): min=2092, max=10444, avg=7985.98, stdev=2765.68
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 63], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4279],
00:14:55.995 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[ 8658],
00:14:55.995 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:14:55.995 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.995 | 99.99th=[10402]
00:14:55.995 lat (msec) : 100=1.52%, >=2000=98.48%
00:14:55.995 cpu : usr=0.01%, sys=0.52%, ctx=69, majf=0, minf=16897
00:14:55.995 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:14:55.995 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job1: (groupid=0, jobs=1): err= 0: pid=3784882: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=1, BW=1679KiB/s (1720kB/s)(17.0MiB/10366msec)
00:14:55.995 slat (msec): min=4, max=2119, avg=606.24, stdev=935.24
00:14:55.995 clat (msec): min=59, max=10276, avg=5874.92, stdev=3184.86
00:14:55.995 lat (msec): min=2112, max=10365, avg=6481.16, stdev=2983.11
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 59], 5.00th=[ 59], 10.00th=[ 2106], 20.00th=[ 2140],
00:14:55.995 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 8557],
00:14:55.995 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10268], 95.00th=[10268],
00:14:55.995 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:14:55.995 | 99.99th=[10268]
00:14:55.995 lat (msec) : 100=5.88%, >=2000=94.12%
00:14:55.995 cpu : usr=0.00%, sys=0.11%, ctx=61, majf=0, minf=4353
00:14:55.995 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job1: (groupid=0, jobs=1): err= 0: pid=3784883: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=12, BW=12.9MiB/s (13.5MB/s)(133MiB/10318msec)
00:14:55.995 slat (usec): min=421, max=2117.7k, avg=75225.99, stdev=341720.03
00:14:55.995 clat (msec): min=312, max=10308, avg=2539.34, stdev=3144.61
00:14:55.995 lat (msec): min=323, max=10308, avg=2614.56, stdev=3209.92
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 405], 20.00th=[ 527],
00:14:55.995 | 30.00th=[ 818], 40.00th=[ 1028], 50.00th=[ 1250], 60.00th=[ 1569],
00:14:55.995 | 70.00th=[ 1838], 80.00th=[ 4111], 90.00th=[10000], 95.00th=[10268],
00:14:55.995 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268],
00:14:55.995 | 99.99th=[10268]
00:14:55.995 bw ( KiB/s): min=11592, max=11592, per=0.35%, avg=11592.00, stdev= 0.00, samples=1
00:14:55.995 iops : min= 11, max= 11, avg=11.00, stdev= 0.00, samples=1
00:14:55.995 lat (msec) : 500=19.55%, 750=8.27%, 1000=12.03%, 2000=35.34%, >=2000=24.81%
00:14:55.995 cpu : usr=0.01%, sys=0.80%, ctx=298, majf=0, minf=32769
00:14:55.995 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.0%, 16=12.0%, 32=24.1%, >=64=52.6%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=85.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=14.3%
00:14:55.995 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job1: (groupid=0, jobs=1): err= 0: pid=3784884: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=1, BW=1380KiB/s (1413kB/s)(14.0MiB/10386msec)
00:14:55.995 slat (msec): min=9, max=2133, avg=737.57, stdev=1004.45
00:14:55.995 clat (msec): min=59, max=10375, avg=6761.59, stdev=3427.13
00:14:55.995 lat (msec): min=2121, max=10385, avg=7499.16, stdev=2951.93
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 59], 5.00th=[ 59], 10.00th=[ 2123], 20.00th=[ 4245],
00:14:55.995 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658],
00:14:55.995 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402],
00:14:55.995 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.995 | 99.99th=[10402]
00:14:55.995 lat (msec) : 100=7.14%, >=2000=92.86%
00:14:55.995 cpu : usr=0.02%, sys=0.10%, ctx=58, majf=0, minf=3585
00:14:55.995 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job2: (groupid=0, jobs=1): err= 0: pid=3784893: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=1, BW=1671KiB/s (1711kB/s)(17.0MiB/10417msec)
00:14:55.995 slat (msec): min=6, max=2124, avg=609.01, stdev=938.45
00:14:55.995 clat (msec): min=62, max=10406, avg=6499.57, stdev=3082.52
00:14:55.995 lat (msec): min=2103, max=10416, avg=7108.58, stdev=2734.43
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 63], 5.00th=[ 63], 10.00th=[ 2106], 20.00th=[ 4245],
00:14:55.995 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557],
00:14:55.995 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402],
00:14:55.995 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.995 | 99.99th=[10402]
00:14:55.995 lat (msec) : 100=5.88%, >=2000=94.12%
00:14:55.995 cpu : usr=0.00%, sys=0.12%, ctx=58, majf=0, minf=4353
00:14:55.995 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job2: (groupid=0, jobs=1): err= 0: pid=3784894: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=2, BW=2265KiB/s (2319kB/s)(23.0MiB/10399msec)
00:14:55.995 slat (msec): min=5, max=2115, avg=448.99, stdev=839.92
00:14:55.995 clat (msec): min=71, max=10393, avg=5843.06, stdev=2924.44
00:14:55.995 lat (msec): min=2112, max=10398, avg=6292.04, stdev=2787.66
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 72], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2140],
00:14:55.995 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477],
00:14:55.995 | 70.00th=[ 8557], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10268],
00:14:55.995 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.995 | 99.99th=[10402]
00:14:55.995 lat (msec) : 100=4.35%, >=2000=95.65%
00:14:55.995 cpu : usr=0.00%, sys=0.16%, ctx=62, majf=0, minf=5889
00:14:55.995 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job2: (groupid=0, jobs=1): err= 0: pid=3784895: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=1, BW=1971KiB/s (2019kB/s)(20.0MiB/10389msec)
00:14:55.995 slat (msec): min=2, max=2101, avg=516.18, stdev=892.15
00:14:55.995 clat (msec): min=65, max=10386, avg=5661.47, stdev=2686.19
00:14:55.995 lat (msec): min=2098, max=10388, avg=6177.66, stdev=2542.23
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 66], 5.00th=[ 66], 10.00th=[ 2106], 20.00th=[ 2140],
00:14:55.995 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6409],
00:14:55.995 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8658],
00:14:55.995 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402],
00:14:55.995 | 99.99th=[10402]
00:14:55.995 lat (msec) : 100=5.00%, >=2000=95.00%
00:14:55.995 cpu : usr=0.00%, sys=0.13%, ctx=56, majf=0, minf=5121
00:14:55.995 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:14:55.995 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job2: (groupid=0, jobs=1): err= 0: pid=3784896: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=4, BW=4681KiB/s (4793kB/s)(48.0MiB/10501msec)
00:14:55.995 slat (usec): min=738, max=2110.8k, avg=217354.30, stdev=617710.83
00:14:55.995 clat (msec): min=67, max=10498, avg=9127.77, stdev=2748.18
00:14:55.995 lat (msec): min=2076, max=10500, avg=9345.12, stdev=2407.85
00:14:55.995 clat percentiles (msec):
00:14:55.995 | 1.00th=[ 68], 5.00th=[ 2106], 10.00th=[ 4279], 20.00th=[ 8557],
00:14:55.995 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402],
00:14:55.995 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537],
00:14:55.995 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:14:55.995 | 99.99th=[10537]
00:14:55.995 lat (msec) : 100=2.08%, >=2000=97.92%
00:14:55.995 cpu : usr=0.00%, sys=0.58%, ctx=85, majf=0, minf=12289
00:14:55.995 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0%
00:14:55.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.995 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:14:55.995 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.995 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.995 job2: (groupid=0, jobs=1): err= 0: pid=3784897: Fri Dec 6 16:27:48 2024
00:14:55.995 read: IOPS=29, BW=29.1MiB/s (30.5MB/s)(293MiB/10074msec)
00:14:55.995 slat (usec): min=46, max=2183.5k, avg=34126.38, stdev=212962.92
00:14:55.995 clat (msec): min=73, max=8898, avg=1385.56, stdev=1831.65
00:14:55.996 lat (msec): min=77, max=8900, avg=1419.69, stdev=1882.29
00:14:55.996 clat percentiles (msec):
00:14:55.996 | 1.00th=[ 117], 5.00th=[ 178], 10.00th=[ 279], 20.00th=[ 489],
00:14:55.996 | 30.00th=[ 676], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 902],
00:14:55.996 | 70.00th=[ 919], 80.00th=[ 2299], 90.00th=[ 2534], 95.00th=[ 4933],
00:14:55.996 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:14:55.996 | 99.99th=[ 8926]
00:14:55.996 bw ( KiB/s): min=53248, max=143360, per=3.25%, avg=108509.33, stdev=48399.27, samples=3
00:14:55.996 iops : min= 52, max= 140, avg=105.67, stdev=47.08, samples=3
00:14:55.996 lat (msec) : 100=0.68%, 250=8.19%, 500=12.63%, 750=10.58%, 1000=45.73%
00:14:55.996 lat (msec) : 2000=1.71%, >=2000=20.48%
00:14:55.996 cpu : usr=0.00%, sys=0.90%, ctx=437, majf=0, minf=32769
00:14:55.996 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=10.9%, >=64=78.5%
00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.996 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6%
00:14:55.996 issued rwts: total=293,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784899: Fri Dec 6 16:27:48 2024
00:14:55.996 read: IOPS=9, BW=9768KiB/s (10.0MB/s)(100MiB/10483msec)
00:14:55.996 slat (usec): min=501, max=2079.5k, avg=104171.84, stdev=424585.26
00:14:55.996 clat (msec): min=64, max=10481, avg=7822.01, stdev=2654.87
00:14:55.996 lat (msec): min=2092, max=10482, avg=7926.18, stdev=2549.71
00:14:55.996 clat percentiles (msec):
00:14:55.996 | 1.00th=[ 65], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 6342],
00:14:55.996 | 30.00th=[ 6477], 40.00th=[ 8356], 50.00th=[ 8423], 60.00th=[ 8490],
00:14:55.996 | 70.00th=[ 8658], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537],
00:14:55.996 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537],
00:14:55.996 | 99.99th=[10537]
00:14:55.996 lat (msec) : 100=1.00%, >=2000=99.00%
00:14:55.996 cpu : usr=0.00%, sys=0.77%, ctx=158, majf=0, minf=25601
00:14:55.996 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0%
00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.996 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:14:55.996 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784900: Fri Dec 6 16:27:48 2024
00:14:55.996 read: IOPS=196, BW=197MiB/s (206MB/s)(2042MiB/10378msec)
00:14:55.996 slat (usec): min=36, max=2009.4k, avg=5045.53, stdev=44701.78
00:14:55.996 clat (msec): min=67, max=2542, avg=617.92, stdev=471.80
00:14:55.996 lat (msec): min=364, max=2544, avg=622.97, stdev=472.88
00:14:55.996 clat percentiles (msec):
00:14:55.996 | 1.00th=[ 368], 5.00th=[ 368], 10.00th=[ 372], 20.00th=[ 376],
00:14:55.996 | 30.00th=[ 384], 40.00th=[ 388], 50.00th=[ 468], 60.00th=[ 523],
00:14:55.996 | 70.00th=[ 625], 80.00th=[ 676], 90.00th=[ 810], 95.00th=[ 2232],
00:14:55.996 | 99.00th=[ 2467], 99.50th=[ 2534], 99.90th=[ 2534], 99.95th=[ 2534],
00:14:55.996 | 99.99th=[ 2534]
00:14:55.996 bw ( KiB/s): min=110592, max=348160, per=7.33%, avg=244856.44, stdev=84883.03, samples=16
00:14:55.996 iops : min= 108, max= 340, avg=238.94, stdev=82.99, samples=16
00:14:55.996 lat (msec) : 100=0.05%, 500=54.02%, 750=34.04%, 1000=5.68%, >=2000=6.22%
00:14:55.996 cpu : usr=0.06%, sys=1.72%, ctx=2495, majf=0, minf=32769
00:14:55.996 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:55.996 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:55.996 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128
00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784901: Fri Dec 6 16:27:48 2024
00:14:55.996 read: IOPS=27, BW=27.6MiB/s (28.9MB/s)(345MiB/12514msec)
00:14:55.996 slat (usec): min=70, max=2074.5k, avg=30140.58, stdev=217263.04
00:14:55.996 clat (msec): min=504, max=11130, avg=4368.36, stdev=4201.62
00:14:55.996 lat (msec): min=506, max=11132, avg=4398.50, stdev=4212.88
00:14:55.996 clat percentiles (msec):
00:14:55.996 | 1.00th=[ 506],
5.00th=[ 518], 10.00th=[ 550], 20.00th=[ 558], 00:14:55.996 | 30.00th=[ 600], 40.00th=[ 667], 50.00th=[ 2869], 60.00th=[ 4933], 00:14:55.996 | 70.00th=[ 6409], 80.00th=[10805], 90.00th=[10939], 95.00th=[11073], 00:14:55.996 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:14:55.996 | 99.99th=[11073] 00:14:55.996 bw ( KiB/s): min= 1582, max=227328, per=1.67%, avg=55749.75, stdev=76869.18, samples=8 00:14:55.996 iops : min= 1, max= 222, avg=54.38, stdev=75.12, samples=8 00:14:55.996 lat (msec) : 750=44.64%, 1000=4.06%, >=2000=51.30% 00:14:55.996 cpu : usr=0.03%, sys=0.82%, ctx=449, majf=0, minf=32769 00:14:55.996 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.3%, >=64=81.7% 00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.996 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:14:55.996 issued rwts: total=345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784902: Fri Dec 6 16:27:48 2024 00:14:55.996 read: IOPS=29, BW=29.7MiB/s (31.1MB/s)(371MiB/12491msec) 00:14:55.996 slat (usec): min=406, max=2087.2k, avg=27962.30, stdev=197133.71 00:14:55.996 clat (msec): min=513, max=11048, avg=4099.43, stdev=3929.80 00:14:55.996 lat (msec): min=515, max=11050, avg=4127.40, stdev=3942.54 00:14:55.996 clat percentiles (msec): 00:14:55.996 | 1.00th=[ 518], 5.00th=[ 550], 10.00th=[ 567], 20.00th=[ 634], 00:14:55.996 | 30.00th=[ 709], 40.00th=[ 785], 50.00th=[ 3373], 60.00th=[ 4044], 00:14:55.996 | 70.00th=[ 7080], 80.00th=[ 7550], 90.00th=[10939], 95.00th=[10939], 00:14:55.996 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:14:55.996 | 99.99th=[11073] 00:14:55.996 bw ( KiB/s): min= 1582, max=251904, per=1.49%, avg=49924.60, stdev=76083.18, samples=10 00:14:55.996 iops : min= 1, max= 246, avg=48.70, stdev=74.34, samples=10 00:14:55.996 lat (msec) : 750=35.85%, 1000=12.67%, >=2000=51.48% 00:14:55.996 cpu : usr=0.01%, sys=0.66%, ctx=769, majf=0, minf=32769 00:14:55.996 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.0% 00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.996 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:14:55.996 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784903: Fri Dec 6 16:27:48 2024 00:14:55.996 read: IOPS=93, BW=93.7MiB/s (98.3MB/s)(946MiB/10091msec) 00:14:55.996 slat (usec): min=43, max=2152.8k, avg=10583.32, stdev=70389.93 00:14:55.996 clat (msec): min=74, max=3378, avg=1306.74, stdev=835.12 00:14:55.996 lat (msec): min=94, max=3383, avg=1317.33, stdev=836.72 00:14:55.996 clat percentiles (msec): 00:14:55.996 | 1.00th=[ 129], 5.00th=[ 592], 10.00th=[ 651], 20.00th=[ 743], 00:14:55.996 | 30.00th=[ 835], 40.00th=[ 936], 50.00th=[ 995], 60.00th=[ 1053], 00:14:55.996 | 70.00th=[ 1217], 80.00th=[ 1754], 90.00th=[ 3138], 95.00th=[ 3306], 00:14:55.996 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:14:55.996 | 99.99th=[ 3373] 00:14:55.996 bw ( KiB/s): min=10240, max=182272, per=3.21%, avg=107178.67, stdev=46280.57, samples=15 00:14:55.996 iops : min= 10, max= 178, avg=104.67, stdev=45.20, samples=15 00:14:55.996 lat (msec) : 100=0.32%, 250=1.59%, 500=1.06%, 750=19.03%, 1000=29.28% 00:14:55.996 lat 
(msec) : 2000=33.72%, >=2000=15.01% 00:14:55.996 cpu : usr=0.02%, sys=1.54%, ctx=1851, majf=0, minf=32769 00:14:55.996 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.996 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.996 issued rwts: total=946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784904: Fri Dec 6 16:27:48 2024 00:14:55.996 read: IOPS=24, BW=24.1MiB/s (25.2MB/s)(303MiB/12596msec) 00:14:55.996 slat (usec): min=66, max=2168.9k, avg=34589.23, stdev=242730.80 00:14:55.996 clat (msec): min=681, max=11575, avg=5157.77, stdev=4983.38 00:14:55.996 lat (msec): min=683, max=11589, avg=5192.36, stdev=4991.17 00:14:55.996 clat percentiles (msec): 00:14:55.996 | 1.00th=[ 701], 5.00th=[ 751], 10.00th=[ 810], 20.00th=[ 852], 00:14:55.996 | 30.00th=[ 860], 40.00th=[ 894], 50.00th=[ 927], 60.00th=[ 8490], 00:14:55.996 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11342], 95.00th=[11476], 00:14:55.996 | 99.00th=[11476], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:14:55.996 | 99.99th=[11610] 00:14:55.996 bw ( KiB/s): min= 1828, max=149504, per=1.54%, avg=51461.14, stdev=62197.94, samples=7 00:14:55.996 iops : min= 1, max= 146, avg=50.14, stdev=60.85, samples=7 00:14:55.996 lat (msec) : 750=5.94%, 1000=49.83%, >=2000=44.22% 00:14:55.996 cpu : usr=0.00%, sys=1.11%, ctx=326, majf=0, minf=32769 00:14:55.996 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.3%, 32=10.6%, >=64=79.2% 00:14:55.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.996 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:14:55.996 issued rwts: total=303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.996 job2: (groupid=0, jobs=1): err= 0: pid=3784905: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=114, BW=114MiB/s (120MB/s)(1152MiB/10076msec) 00:14:55.997 slat (usec): min=41, max=2155.1k, avg=8676.71, stdev=63737.27 00:14:55.997 clat (msec): min=74, max=3336, avg=1082.90, stdev=769.45 00:14:55.997 lat (msec): min=95, max=3340, avg=1091.58, stdev=771.85 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 136], 5.00th=[ 393], 10.00th=[ 422], 20.00th=[ 502], 00:14:55.997 | 30.00th=[ 659], 40.00th=[ 709], 50.00th=[ 911], 60.00th=[ 995], 00:14:55.997 | 70.00th=[ 1234], 80.00th=[ 1368], 90.00th=[ 2735], 95.00th=[ 3104], 00:14:55.997 | 99.00th=[ 3306], 99.50th=[ 3306], 99.90th=[ 3339], 99.95th=[ 3339], 00:14:55.997 | 99.99th=[ 3339] 00:14:55.997 bw ( KiB/s): min= 4104, max=270336, per=3.93%, avg=131186.94, stdev=67740.64, samples=16 00:14:55.997 iops : min= 4, max= 264, avg=128.06, stdev=66.17, samples=16 00:14:55.997 lat (msec) : 100=0.26%, 250=2.17%, 500=17.53%, 750=23.44%, 1000=17.01% 00:14:55.997 lat (msec) : 2000=28.56%, >=2000=11.02% 00:14:55.997 cpu : usr=0.05%, sys=2.04%, ctx=1806, majf=0, minf=32769 00:14:55.997 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.997 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job2: 
(groupid=0, jobs=1): err= 0: pid=3784906: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=5, BW=6025KiB/s (6170kB/s)(62.0MiB/10537msec) 00:14:55.997 slat (usec): min=728, max=2117.9k, avg=168771.55, stdev=550109.28 00:14:55.997 clat (msec): min=72, max=10531, avg=9035.24, stdev=2660.37 00:14:55.997 lat (msec): min=2112, max=10536, avg=9204.01, stdev=2401.78 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 72], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 8557], 00:14:55.997 | 30.00th=[ 8658], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:14:55.997 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:14:55.997 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:14:55.997 | 99.99th=[10537] 00:14:55.997 lat (msec) : 100=1.61%, >=2000=98.39% 00:14:55.997 cpu : usr=0.00%, sys=0.67%, ctx=94, majf=0, minf=15873 00:14:55.997 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:14:55.997 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784911: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=1, BW=1726KiB/s (1767kB/s)(21.0MiB/12460msec) 00:14:55.997 slat (usec): min=1203, max=2094.2k, avg=492394.99, stdev=866881.76 00:14:55.997 clat (msec): min=2118, max=12458, avg=9170.34, stdev=3399.75 00:14:55.997 lat (msec): min=4178, max=12459, avg=9662.73, stdev=3059.16 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:14:55.997 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:14:55.997 | 70.00th=[12416], 80.00th=[12416], 90.00th=[12416], 95.00th=[12416], 00:14:55.997 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:14:55.997 | 99.99th=[12416] 00:14:55.997 lat (msec) : >=2000=100.00% 00:14:55.997 cpu : usr=0.01%, sys=0.11%, ctx=73, majf=0, minf=5377 00:14:55.997 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:55.997 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784912: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=22, BW=22.7MiB/s (23.8MB/s)(228MiB/10030msec) 00:14:55.997 slat (usec): min=115, max=2084.1k, avg=43860.92, stdev=260545.21 00:14:55.997 clat (msec): min=28, max=9227, avg=1159.10, stdev=1648.26 00:14:55.997 lat (msec): min=36, max=9263, avg=1202.96, stdev=1733.97 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 42], 5.00th=[ 59], 10.00th=[ 120], 20.00th=[ 342], 00:14:55.997 | 30.00th=[ 485], 40.00th=[ 693], 50.00th=[ 860], 60.00th=[ 1028], 00:14:55.997 | 70.00th=[ 1083], 80.00th=[ 1099], 90.00th=[ 1150], 95.00th=[ 5403], 00:14:55.997 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:14:55.997 | 99.99th=[ 9194] 00:14:55.997 bw ( KiB/s): min=55296, max=55296, per=1.66%, avg=55296.00, stdev= 0.00, samples=1 00:14:55.997 iops : min= 54, max= 54, avg=54.00, stdev= 0.00, samples=1 00:14:55.997 lat (msec) : 50=3.51%, 100=3.95%, 
250=8.33%, 500=16.67%, 750=10.96% 00:14:55.997 lat (msec) : 1000=14.47%, 2000=33.33%, >=2000=8.77% 00:14:55.997 cpu : usr=0.01%, sys=0.58%, ctx=422, majf=0, minf=32769 00:14:55.997 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.4% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:14:55.997 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784913: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=54, BW=54.1MiB/s (56.7MB/s)(545MiB/10075msec) 00:14:55.997 slat (usec): min=42, max=2050.6k, avg=18355.16, stdev=150860.08 00:14:55.997 clat (msec): min=68, max=8400, avg=2277.08, stdev=2797.28 00:14:55.997 lat (msec): min=83, max=8402, avg=2295.43, stdev=2807.58 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 159], 5.00th=[ 498], 10.00th=[ 498], 20.00th=[ 502], 00:14:55.997 | 30.00th=[ 506], 40.00th=[ 518], 50.00th=[ 567], 60.00th=[ 735], 00:14:55.997 | 70.00th=[ 1603], 80.00th=[ 6074], 90.00th=[ 7349], 95.00th=[ 7886], 00:14:55.997 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8423], 99.95th=[ 8423], 00:14:55.997 | 99.99th=[ 8423] 00:14:55.997 bw ( KiB/s): min= 8192, max=262144, per=2.85%, avg=95109.11, stdev=89529.29, samples=9 00:14:55.997 iops : min= 8, max= 256, avg=92.78, stdev=87.50, samples=9 00:14:55.997 lat (msec) : 100=0.37%, 250=1.65%, 500=10.83%, 750=47.34%, 1000=1.65% 00:14:55.997 lat (msec) : 2000=11.01%, >=2000=27.16% 00:14:55.997 cpu : usr=0.00%, sys=1.45%, ctx=741, majf=0, minf=32769 00:14:55.997 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:55.997 issued rwts: total=545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784914: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=127, BW=128MiB/s (134MB/s)(1290MiB/10108msec) 00:14:55.997 slat (usec): min=38, max=1477.8k, avg=7750.38, stdev=43861.32 00:14:55.997 clat (msec): min=106, max=2650, avg=961.82, stdev=602.82 00:14:55.997 lat (msec): min=166, max=2660, avg=969.57, stdev=605.49 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 309], 5.00th=[ 405], 10.00th=[ 430], 20.00th=[ 542], 00:14:55.997 | 30.00th=[ 609], 40.00th=[ 718], 50.00th=[ 793], 60.00th=[ 844], 00:14:55.997 | 70.00th=[ 961], 80.00th=[ 1083], 90.00th=[ 1804], 95.00th=[ 2567], 00:14:55.997 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2635], 99.95th=[ 2635], 00:14:55.997 | 99.99th=[ 2635] 00:14:55.997 bw ( KiB/s): min=14336, max=315392, per=4.19%, avg=140100.06, stdev=76964.92, samples=17 00:14:55.997 iops : min= 14, max= 308, avg=136.76, stdev=75.22, samples=17 00:14:55.997 lat (msec) : 250=0.70%, 500=15.74%, 750=29.30%, 1000=24.96%, 2000=19.46% 00:14:55.997 lat (msec) : >=2000=9.84% 00:14:55.997 cpu : usr=0.03%, sys=1.64%, ctx=1411, majf=0, minf=32769 00:14:55.997 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.997 issued rwts: total=1290,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784915: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=38, BW=38.8MiB/s (40.7MB/s)(392MiB/10097msec) 00:14:55.997 slat (usec): min=97, max=2012.8k, avg=25512.28, stdev=145308.83 00:14:55.997 clat (msec): min=93, max=7812, avg=2559.46, stdev=1583.12 00:14:55.997 lat (msec): min=96, max=8143, avg=2584.97, stdev=1598.48 00:14:55.997 clat percentiles (msec): 00:14:55.997 | 1.00th=[ 138], 5.00th=[ 347], 10.00th=[ 642], 20.00th=[ 1217], 00:14:55.997 | 30.00th=[ 1469], 40.00th=[ 1603], 50.00th=[ 1787], 60.00th=[ 3641], 00:14:55.997 | 70.00th=[ 3842], 80.00th=[ 4396], 90.00th=[ 4530], 95.00th=[ 4665], 00:14:55.997 | 99.00th=[ 6141], 99.50th=[ 6477], 99.90th=[ 7819], 99.95th=[ 7819], 00:14:55.997 | 99.99th=[ 7819] 00:14:55.997 bw ( KiB/s): min= 2048, max=124928, per=1.80%, avg=60286.22, stdev=33741.11, samples=9 00:14:55.997 iops : min= 2, max= 122, avg=58.78, stdev=32.92, samples=9 00:14:55.997 lat (msec) : 100=0.51%, 250=2.81%, 500=3.83%, 750=4.85%, 1000=4.59% 00:14:55.997 lat (msec) : 2000=36.99%, >=2000=46.43% 00:14:55.997 cpu : usr=0.03%, sys=1.16%, ctx=811, majf=0, minf=32769 00:14:55.997 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:14:55.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.997 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:14:55.997 issued rwts: total=392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.997 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.997 job3: (groupid=0, jobs=1): err= 0: pid=3784916: Fri Dec 6 16:27:48 2024 00:14:55.997 read: IOPS=33, BW=33.9MiB/s (35.6MB/s)(353MiB/10398msec) 00:14:55.997 slat (usec): min=43, max=2071.3k, avg=29182.65, stdev=163620.55 00:14:55.997 clat (msec): min=94, max=7396, avg=3497.46, stdev=2140.75 00:14:55.997 lat (msec): min=1461, max=7450, avg=3526.64, stdev=2138.89 00:14:55.997 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 1469], 5.00th=[ 1569], 10.00th=[ 1653], 20.00th=[ 1754], 00:14:55.998 | 30.00th=[ 1838], 40.00th=[ 1972], 50.00th=[ 2140], 60.00th=[ 2366], 00:14:55.998 | 70.00th=[ 5537], 80.00th=[ 6208], 90.00th=[ 6812], 95.00th=[ 7148], 00:14:55.998 | 99.00th=[ 7416], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416], 00:14:55.998 | 99.99th=[ 7416] 00:14:55.998 bw ( KiB/s): min= 6144, max=69632, per=1.38%, avg=46065.60, stdev=21812.24, samples=10 00:14:55.998 iops : min= 6, max= 68, avg=44.80, stdev=21.25, samples=10 00:14:55.998 lat (msec) : 100=0.28%, 2000=41.08%, >=2000=58.64% 00:14:55.998 cpu : usr=0.00%, sys=0.89%, ctx=779, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.1%, >=64=82.2% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:14:55.998 issued rwts: total=353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784917: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=64, BW=64.5MiB/s (67.7MB/s)(651MiB/10088msec) 00:14:55.998 slat (usec): min=29, max=1955.6k, avg=15373.62, stdev=100804.30 00:14:55.998 clat (msec): min=76, max=3795, avg=1531.85, stdev=1051.89 00:14:55.998 lat (msec): min=104, max=4649, avg=1547.22, stdev=1060.06 00:14:55.998 clat percentiles (msec): 00:14:55.998 
| 1.00th=[ 136], 5.00th=[ 493], 10.00th=[ 651], 20.00th=[ 701], 00:14:55.998 | 30.00th=[ 835], 40.00th=[ 869], 50.00th=[ 1036], 60.00th=[ 1418], 00:14:55.998 | 70.00th=[ 1720], 80.00th=[ 2903], 90.00th=[ 3406], 95.00th=[ 3608], 00:14:55.998 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:14:55.998 | 99.99th=[ 3809] 00:14:55.998 bw ( KiB/s): min=16384, max=200704, per=2.89%, avg=96640.00, stdev=60404.30, samples=11 00:14:55.998 iops : min= 16, max= 196, avg=94.27, stdev=59.10, samples=11 00:14:55.998 lat (msec) : 100=0.15%, 250=2.15%, 500=2.92%, 750=19.20%, 1000=25.19% 00:14:55.998 lat (msec) : 2000=27.65%, >=2000=22.73% 00:14:55.998 cpu : usr=0.01%, sys=1.00%, ctx=1064, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:55.998 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784918: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=11, BW=11.2MiB/s (11.7MB/s)(139MiB/12430msec) 00:14:55.998 slat (usec): min=428, max=2094.2k, avg=74131.12, stdev=345264.70 00:14:55.998 clat (msec): min=2124, max=12395, avg=10444.87, stdev=2207.09 00:14:55.998 lat (msec): min=3859, max=12429, avg=10519.00, stdev=2091.78 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 3842], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[10671], 00:14:55.998 | 30.00th=[10805], 40.00th=[11073], 50.00th=[11208], 60.00th=[11342], 00:14:55.998 | 70.00th=[11610], 80.00th=[11745], 90.00th=[12013], 95.00th=[12147], 00:14:55.998 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:14:55.998 | 99.99th=[12416] 00:14:55.998 bw ( KiB/s): min= 1582, max=10240, per=0.18%, avg=6027.50, stdev=4867.89, samples=4 00:14:55.998 iops : min= 1, max= 10, avg= 5.75, stdev= 4.92, samples=4 00:14:55.998 lat (msec) : >=2000=100.00% 00:14:55.998 cpu : usr=0.00%, sys=0.60%, ctx=271, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.5%, 32=23.0%, >=64=54.7% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=92.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.7% 00:14:55.998 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784919: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(562MiB/10069msec) 00:14:55.998 slat (usec): min=50, max=2092.3k, avg=17808.54, stdev=145393.51 00:14:55.998 clat (msec): min=58, max=7152, avg=2128.34, stdev=2603.88 00:14:55.998 lat (msec): min=74, max=7156, avg=2146.15, stdev=2611.49 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 81], 5.00th=[ 321], 10.00th=[ 368], 20.00th=[ 393], 00:14:55.998 | 30.00th=[ 510], 40.00th=[ 625], 50.00th=[ 735], 60.00th=[ 1183], 00:14:55.998 | 70.00th=[ 1401], 80.00th=[ 6745], 90.00th=[ 7013], 95.00th=[ 7080], 00:14:55.998 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:14:55.998 | 99.99th=[ 7148] 00:14:55.998 bw ( KiB/s): min= 4096, max=270336, per=2.96%, avg=98759.11, stdev=91799.63, samples=9 00:14:55.998 iops : min= 4, max= 264, avg=96.44, stdev=89.65, samples=9 00:14:55.998 lat (msec) : 
100=2.31%, 250=1.60%, 500=25.62%, 750=21.53%, 1000=5.52% 00:14:55.998 lat (msec) : 2000=19.57%, >=2000=23.84% 00:14:55.998 cpu : usr=0.01%, sys=0.97%, ctx=830, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:55.998 issued rwts: total=562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784920: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=33, BW=34.0MiB/s (35.6MB/s)(342MiB/10070msec) 00:14:55.998 slat (usec): min=55, max=2102.6k, avg=29310.38, stdev=212077.85 00:14:55.998 clat (msec): min=44, max=8934, avg=994.02, stdev=1619.93 00:14:55.998 lat (msec): min=94, max=8987, avg=1023.33, stdev=1675.89 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 99], 5.00th=[ 171], 10.00th=[ 247], 20.00th=[ 405], 00:14:55.998 | 30.00th=[ 558], 40.00th=[ 600], 50.00th=[ 600], 60.00th=[ 609], 00:14:55.998 | 70.00th=[ 651], 80.00th=[ 667], 90.00th=[ 818], 95.00th=[ 5067], 00:14:55.998 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:14:55.998 | 99.99th=[ 8926] 00:14:55.998 bw ( KiB/s): min=24576, max=198492, per=4.19%, avg=139892.00, stdev=99871.03, samples=3 00:14:55.998 iops : min= 24, max= 193, avg=136.33, stdev=97.28, samples=3 00:14:55.998 lat (msec) : 50=0.29%, 100=0.88%, 250=9.36%, 500=16.37%, 750=59.65% 00:14:55.998 lat (msec) : 1000=5.26%, >=2000=8.19% 00:14:55.998 cpu : usr=0.02%, sys=0.72%, ctx=399, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.6% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:14:55.998 issued rwts: total=342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784921: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=26, BW=26.1MiB/s (27.4MB/s)(263MiB/10080msec) 00:14:55.998 slat (usec): min=71, max=2081.9k, avg=38087.32, stdev=223869.54 00:14:55.998 clat (msec): min=61, max=6688, avg=2806.84, stdev=1868.81 00:14:55.998 lat (msec): min=82, max=6691, avg=2844.93, stdev=1886.08 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 92], 5.00th=[ 317], 10.00th=[ 584], 20.00th=[ 953], 00:14:55.998 | 30.00th=[ 1603], 40.00th=[ 2299], 50.00th=[ 2769], 60.00th=[ 3171], 00:14:55.998 | 70.00th=[ 3239], 80.00th=[ 3339], 90.00th=[ 6611], 95.00th=[ 6678], 00:14:55.998 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:14:55.998 | 99.99th=[ 6678] 00:14:55.998 bw ( KiB/s): min=20480, max=122880, per=1.66%, avg=55280.40, stdev=39983.37, samples=5 00:14:55.998 iops : min= 20, max= 120, avg=53.80, stdev=39.14, samples=5 00:14:55.998 lat (msec) : 100=1.14%, 250=2.66%, 500=3.04%, 750=9.13%, 1000=4.56% 00:14:55.998 lat (msec) : 2000=13.31%, >=2000=66.16% 00:14:55.998 cpu : usr=0.00%, sys=1.01%, ctx=557, majf=0, minf=32769 00:14:55.998 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.1%, 32=12.2%, >=64=76.0% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:14:55.998 issued rwts: 
total=263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784922: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=1, BW=1397KiB/s (1431kB/s)(17.0MiB/12457msec) 00:14:55.998 slat (msec): min=9, max=2111, avg=607.73, stdev=934.55 00:14:55.998 clat (msec): min=2124, max=12444, avg=7932.96, stdev=3338.66 00:14:55.998 lat (msec): min=4167, max=12455, avg=8540.70, stdev=3150.26 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4178], 20.00th=[ 4212], 00:14:55.998 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8557], 00:14:55.998 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12416], 95.00th=[12416], 00:14:55.998 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:14:55.998 | 99.99th=[12416] 00:14:55.998 lat (msec) : >=2000=100.00% 00:14:55.998 cpu : usr=0.00%, sys=0.11%, ctx=64, majf=0, minf=4353 00:14:55.998 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:14:55.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.998 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:14:55.998 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.998 job3: (groupid=0, jobs=1): err= 0: pid=3784923: Fri Dec 6 16:27:48 2024 00:14:55.998 read: IOPS=63, BW=64.0MiB/s (67.1MB/s)(645MiB/10081msec) 00:14:55.998 slat (usec): min=51, max=2079.9k, avg=15503.87, stdev=142749.28 00:14:55.998 clat (msec): min=78, max=7658, avg=1933.16, stdev=2618.93 00:14:55.998 lat (msec): min=82, max=7664, avg=1948.66, stdev=2627.98 00:14:55.998 clat percentiles (msec): 00:14:55.998 | 1.00th=[ 100], 5.00th=[ 220], 10.00th=[ 380], 20.00th=[ 535], 00:14:55.998 | 30.00th=[ 584], 40.00th=[ 600], 50.00th=[ 617], 60.00th=[ 625], 00:14:55.998 | 70.00th=[ 676], 80.00th=[ 3440], 90.00th=[ 7550], 95.00th=[ 7617], 00:14:55.998 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 7684], 99.95th=[ 7684], 00:14:55.998 | 99.99th=[ 7684] 00:14:55.998 bw ( KiB/s): min= 2048, max=225280, per=2.89%, avg=96403.09, stdev=92352.38, samples=11 00:14:55.998 iops : min= 2, max= 220, avg=94.09, stdev=90.11, samples=11 00:14:55.998 lat (msec) : 100=1.09%, 250=4.81%, 500=8.53%, 750=59.38%, 2000=3.41% 00:14:55.998 lat (msec) : >=2000=22.79% 00:14:55.998 cpu : usr=0.06%, sys=1.32%, ctx=739, majf=0, minf=32770 00:14:55.998 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:55.999 issued rwts: total=645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784934: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=126, BW=126MiB/s (132MB/s)(1320MiB/10458msec) 00:14:55.999 slat (usec): min=51, max=2053.3k, avg=7841.59, stdev=92052.88 00:14:55.999 clat (msec): min=99, max=6816, avg=836.97, stdev=923.30 00:14:55.999 lat (msec): min=245, max=6825, avg=844.81, stdev=932.93 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 249], 20.00th=[ 266], 00:14:55.999 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 388], 60.00th=[ 393], 00:14:55.999 | 70.00th=[ 481], 80.00th=[ 2232], 90.00th=[ 
2534], 95.00th=[ 2769], 00:14:55.999 | 99.00th=[ 3104], 99.50th=[ 3104], 99.90th=[ 5134], 99.95th=[ 6812], 00:14:55.999 | 99.99th=[ 6812] 00:14:55.999 bw ( KiB/s): min= 4096, max=454656, per=7.31%, avg=244088.40, stdev=152880.77, samples=10 00:14:55.999 iops : min= 4, max= 444, avg=238.30, stdev=149.34, samples=10 00:14:55.999 lat (msec) : 100=0.08%, 250=13.03%, 500=57.73%, 750=5.08%, 1000=3.56% 00:14:55.999 lat (msec) : >=2000=20.53% 00:14:55.999 cpu : usr=0.07%, sys=2.13%, ctx=1285, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.999 issued rwts: total=1320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784935: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(275MiB/10384msec) 00:14:55.999 slat (usec): min=61, max=2068.0k, avg=37394.00, stdev=226674.25 00:14:55.999 clat (msec): min=98, max=9324, avg=4567.66, stdev=3098.21 00:14:55.999 lat (msec): min=706, max=9330, avg=4605.05, stdev=3097.40 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 701], 5.00th=[ 735], 10.00th=[ 894], 20.00th=[ 1053], 00:14:55.999 | 30.00th=[ 1083], 40.00th=[ 3440], 50.00th=[ 4144], 60.00th=[ 5470], 00:14:55.999 | 70.00th=[ 6342], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060], 00:14:55.999 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:14:55.999 | 99.99th=[ 9329] 00:14:55.999 bw ( KiB/s): min= 2048, max=71680, per=1.13%, avg=37632.00, stdev=28837.40, samples=8 00:14:55.999 iops : min= 2, max= 70, avg=36.75, stdev=28.16, samples=8 00:14:55.999 lat (msec) : 100=0.36%, 750=5.45%, 1000=8.36%, 2000=18.18%, >=2000=67.64% 00:14:55.999 cpu : usr=0.02%, sys=0.78%, ctx=681, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.1% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:14:55.999 issued rwts: total=275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784936: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=79, BW=79.4MiB/s (83.3MB/s)(795MiB/10010msec) 00:14:55.999 slat (usec): min=41, max=2074.8k, avg=12576.42, stdev=119614.01 00:14:55.999 clat (msec): min=8, max=6658, avg=708.85, stdev=721.03 00:14:55.999 lat (msec): min=9, max=6705, avg=721.43, stdev=751.55 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 22], 5.00th=[ 153], 10.00th=[ 305], 20.00th=[ 468], 00:14:55.999 | 30.00th=[ 550], 40.00th=[ 592], 50.00th=[ 617], 60.00th=[ 651], 00:14:55.999 | 70.00th=[ 718], 80.00th=[ 760], 90.00th=[ 802], 95.00th=[ 860], 00:14:55.999 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 6678], 99.95th=[ 6678], 00:14:55.999 | 99.99th=[ 6678] 00:14:55.999 bw ( KiB/s): min=141312, max=264192, per=5.85%, avg=195437.71, stdev=46096.79, samples=7 00:14:55.999 iops : min= 138, max= 258, avg=190.86, stdev=45.02, samples=7 00:14:55.999 lat (msec) : 10=0.25%, 20=0.63%, 50=1.89%, 100=1.76%, 250=3.90% 00:14:55.999 lat (msec) : 500=18.99%, 750=51.07%, 1000=17.86%, >=2000=3.65% 00:14:55.999 cpu : usr=0.01%, sys=1.02%, ctx=1010, majf=0, 
minf=32769 00:14:55.999 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.999 issued rwts: total=795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784938: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(312MiB/10397msec) 00:14:55.999 slat (usec): min=345, max=2145.3k, avg=33000.76, stdev=232610.68 00:14:55.999 clat (msec): min=99, max=9407, avg=4114.26, stdev=3818.05 00:14:55.999 lat (msec): min=499, max=9418, avg=4147.26, stdev=3821.14 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 498], 5.00th=[ 523], 10.00th=[ 558], 20.00th=[ 718], 00:14:55.999 | 30.00th=[ 835], 40.00th=[ 936], 50.00th=[ 1020], 60.00th=[ 5201], 00:14:55.999 | 70.00th=[ 8792], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9329], 00:14:55.999 | 99.00th=[ 9329], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:14:55.999 | 99.99th=[ 9463] 00:14:55.999 bw ( KiB/s): min= 2043, max=139264, per=1.61%, avg=53829.43, stdev=57950.25, samples=7 00:14:55.999 iops : min= 1, max= 136, avg=52.29, stdev=56.86, samples=7 00:14:55.999 lat (msec) : 100=0.32%, 500=1.92%, 750=20.51%, 1000=24.04%, 2000=5.77% 00:14:55.999 lat (msec) : >=2000=47.44% 00:14:55.999 cpu : usr=0.00%, sys=0.64%, ctx=449, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.8% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:14:55.999 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784939: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=47, BW=47.1MiB/s (49.4MB/s)(473MiB/10035msec) 00:14:55.999 slat (usec): min=347, max=3637.7k, avg=21173.08, stdev=191470.75 00:14:55.999 clat (msec): min=18, max=6714, avg=1486.55, stdev=1508.08 00:14:55.999 lat (msec): min=38, max=6751, avg=1507.73, stdev=1526.52 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 72], 5.00th=[ 234], 10.00th=[ 464], 20.00th=[ 961], 00:14:55.999 | 30.00th=[ 1099], 40.00th=[ 1167], 50.00th=[ 1217], 60.00th=[ 1234], 00:14:55.999 | 70.00th=[ 1250], 80.00th=[ 1284], 90.00th=[ 1334], 95.00th=[ 6544], 00:14:55.999 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:14:55.999 | 99.99th=[ 6745] 00:14:55.999 bw ( KiB/s): min=81920, max=124928, per=3.02%, avg=100870.14, stdev=13779.35, samples=7 00:14:55.999 iops : min= 80, max= 122, avg=98.29, stdev=13.52, samples=7 00:14:55.999 lat (msec) : 20=0.21%, 50=0.21%, 100=1.90%, 250=3.17%, 500=4.86% 00:14:55.999 lat (msec) : 750=5.29%, 1000=7.40%, 2000=68.08%, >=2000=8.88% 00:14:55.999 cpu : usr=0.00%, sys=0.81%, ctx=920, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:14:55.999 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): 
err= 0: pid=3784940: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=63, BW=63.6MiB/s (66.7MB/s)(662MiB/10406msec) 00:14:55.999 slat (usec): min=50, max=2067.6k, avg=15569.21, stdev=150756.85 00:14:55.999 clat (msec): min=96, max=6483, avg=973.32, stdev=1052.15 00:14:55.999 lat (msec): min=301, max=6553, avg=988.89, stdev=1075.28 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 309], 5.00th=[ 313], 10.00th=[ 326], 20.00th=[ 351], 00:14:55.999 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 388], 60.00th=[ 498], 00:14:55.999 | 70.00th=[ 634], 80.00th=[ 2299], 90.00th=[ 2467], 95.00th=[ 2668], 00:14:55.999 | 99.00th=[ 4799], 99.50th=[ 6477], 99.90th=[ 6477], 99.95th=[ 6477], 00:14:55.999 | 99.99th=[ 6477] 00:14:55.999 bw ( KiB/s): min=134898, max=385024, per=8.18%, avg=273340.50, stdev=120390.42, samples=4 00:14:55.999 iops : min= 131, max= 376, avg=266.75, stdev=117.85, samples=4 00:14:55.999 lat (msec) : 100=0.15%, 500=60.27%, 750=14.20%, >=2000=25.38% 00:14:55.999 cpu : usr=0.02%, sys=1.02%, ctx=740, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:55.999 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784941: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=128, BW=129MiB/s (135MB/s)(1290MiB/10025msec) 00:14:55.999 slat (usec): min=30, max=2095.3k, avg=7753.29, stdev=58838.46 00:14:55.999 clat (msec): min=18, max=3091, avg=934.39, stdev=719.07 00:14:55.999 lat (msec): min=65, max=3094, avg=942.15, stdev=721.35 00:14:55.999 clat percentiles (msec): 00:14:55.999 | 1.00th=[ 155], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 355], 00:14:55.999 | 30.00th=[ 575], 40.00th=[ 684], 50.00th=[ 751], 60.00th=[ 835], 00:14:55.999 | 70.00th=[ 969], 80.00th=[ 1267], 90.00th=[ 1435], 95.00th=[ 2869], 00:14:55.999 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 3104], 00:14:55.999 | 99.99th=[ 3104] 00:14:55.999 bw ( KiB/s): min=22528, max=446464, per=4.46%, avg=148840.00, stdev=95708.46, samples=16 00:14:55.999 iops : min= 22, max= 436, avg=145.31, stdev=93.49, samples=16 00:14:55.999 lat (msec) : 20=0.08%, 100=0.47%, 250=10.70%, 500=15.97%, 750=23.33% 00:14:55.999 lat (msec) : 1000=19.84%, 2000=19.77%, >=2000=9.84% 00:14:55.999 cpu : usr=0.04%, sys=1.41%, ctx=1808, majf=0, minf=32769 00:14:55.999 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:14:55.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:55.999 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:55.999 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:55.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:55.999 job4: (groupid=0, jobs=1): err= 0: pid=3784942: Fri Dec 6 16:27:48 2024 00:14:55.999 read: IOPS=16, BW=16.8MiB/s (17.6MB/s)(175MiB/10440msec) 00:14:55.999 slat (usec): min=406, max=2078.0k, avg=59109.94, stdev=272396.39 00:14:55.999 clat (msec): min=94, max=9637, avg=6843.09, stdev=2751.57 00:14:56.000 lat (msec): min=2008, max=9642, avg=6902.20, stdev=2705.19 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 1989], 5.00th=[ 2039], 10.00th=[ 2198], 20.00th=[ 3440], 00:14:56.000 | 30.00th=[ 5470], 40.00th=[ 7483], 50.00th=[ 
8221], 60.00th=[ 8658], 00:14:56.000 | 70.00th=[ 8926], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9597], 00:14:56.000 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:14:56.000 | 99.99th=[ 9597] 00:14:56.000 bw ( KiB/s): min= 6144, max=28672, per=0.48%, avg=16042.67, stdev=10150.87, samples=6 00:14:56.000 iops : min= 6, max= 28, avg=15.67, stdev= 9.91, samples=6 00:14:56.000 lat (msec) : 100=0.57%, 2000=1.71%, >=2000=97.71% 00:14:56.000 cpu : usr=0.00%, sys=0.74%, ctx=543, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.1%, 32=18.3%, >=64=64.0% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:14:56.000 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job4: (groupid=0, jobs=1): err= 0: pid=3784943: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=4, BW=4899KiB/s (5017kB/s)(50.0MiB/10451msec) 00:14:56.000 slat (usec): min=770, max=2072.6k, avg=207027.46, stdev=599664.65 00:14:56.000 clat (msec): min=98, max=10449, avg=7434.76, stdev=3188.67 00:14:56.000 lat (msec): min=2133, max=10450, avg=7641.79, stdev=3034.98 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 100], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:14:56.000 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10268], 00:14:56.000 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:14:56.000 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:14:56.000 | 99.99th=[10402] 00:14:56.000 lat (msec) : 100=2.00%, >=2000=98.00% 00:14:56.000 cpu : usr=0.00%, sys=0.34%, ctx=79, majf=0, minf=12801 00:14:56.000 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:14:56.000 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job4: (groupid=0, jobs=1): err= 0: pid=3784944: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=32, BW=32.2MiB/s (33.8MB/s)(340MiB/10556msec) 00:14:56.000 slat (usec): min=109, max=2070.2k, avg=30753.79, stdev=194262.31 00:14:56.000 clat (msec): min=97, max=7805, avg=3718.14, stdev=2696.57 00:14:56.000 lat (msec): min=963, max=7808, avg=3748.89, stdev=2692.62 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 961], 5.00th=[ 1070], 10.00th=[ 1284], 20.00th=[ 1536], 00:14:56.000 | 30.00th=[ 1770], 40.00th=[ 1838], 50.00th=[ 1888], 60.00th=[ 2165], 00:14:56.000 | 70.00th=[ 7148], 80.00th=[ 7282], 90.00th=[ 7416], 95.00th=[ 7617], 00:14:56.000 | 99.00th=[ 7752], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:14:56.000 | 99.99th=[ 7819] 00:14:56.000 bw ( KiB/s): min=10240, max=174080, per=1.62%, avg=54249.50, stdev=56198.48, samples=8 00:14:56.000 iops : min= 10, max= 170, avg=52.88, stdev=54.81, samples=8 00:14:56.000 lat (msec) : 100=0.29%, 1000=2.35%, 2000=56.47%, >=2000=40.88% 00:14:56.000 cpu : usr=0.01%, sys=1.16%, ctx=615, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.5% 00:14:56.000 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job4: (groupid=0, jobs=1): err= 0: pid=3784945: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=98, BW=98.2MiB/s (103MB/s)(993MiB/10110msec) 00:14:56.000 slat (usec): min=58, max=81385, avg=10083.43, stdev=12249.26 00:14:56.000 clat (msec): min=92, max=1728, avg=1220.64, stdev=343.18 00:14:56.000 lat (msec): min=167, max=1737, avg=1230.72, stdev=343.73 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 275], 5.00th=[ 776], 10.00th=[ 827], 20.00th=[ 860], 00:14:56.000 | 30.00th=[ 919], 40.00th=[ 1167], 50.00th=[ 1267], 60.00th=[ 1368], 00:14:56.000 | 70.00th=[ 1469], 80.00th=[ 1569], 90.00th=[ 1636], 95.00th=[ 1670], 00:14:56.000 | 99.00th=[ 1720], 99.50th=[ 1720], 99.90th=[ 1737], 99.95th=[ 1737], 00:14:56.000 | 99.99th=[ 1737] 00:14:56.000 bw ( KiB/s): min=20480, max=169984, per=2.95%, avg=98410.50, stdev=37140.72, samples=18 00:14:56.000 iops : min= 20, max= 166, avg=96.06, stdev=36.32, samples=18 00:14:56.000 lat (msec) : 100=0.10%, 250=0.81%, 500=1.71%, 750=2.11%, 1000=26.79% 00:14:56.000 lat (msec) : 2000=68.48% 00:14:56.000 cpu : usr=0.03%, sys=1.74%, ctx=1848, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.000 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job4: (groupid=0, jobs=1): err= 0: pid=3784946: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=55, BW=55.1MiB/s (57.8MB/s)(581MiB/10544msec) 00:14:56.000 slat (usec): min=33, max=1880.1k, avg=17975.69, stdev=115803.70 00:14:56.000 clat (msec): min=97, max=4038, avg=2108.85, stdev=996.55 00:14:56.000 lat (msec): min=383, max=4039, avg=2126.82, stdev=991.69 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 384], 5.00th=[ 506], 10.00th=[ 936], 20.00th=[ 1418], 00:14:56.000 | 30.00th=[ 1636], 40.00th=[ 1854], 50.00th=[ 1989], 60.00th=[ 2072], 00:14:56.000 | 70.00th=[ 2232], 80.00th=[ 2769], 90.00th=[ 3876], 95.00th=[ 3943], 00:14:56.000 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 4044], 99.95th=[ 4044], 00:14:56.000 | 99.99th=[ 4044] 00:14:56.000 bw ( KiB/s): min= 8192, max=204800, per=2.78%, avg=92749.80, stdev=64657.51, samples=10 00:14:56.000 iops : min= 8, max= 200, avg=90.50, stdev=63.10, samples=10 00:14:56.000 lat (msec) : 100=0.17%, 500=4.65%, 750=2.58%, 1000=6.71%, 2000=37.52% 00:14:56.000 lat (msec) : >=2000=48.36% 00:14:56.000 cpu : usr=0.02%, sys=1.14%, ctx=886, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.2% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.000 issued rwts: total=581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job4: (groupid=0, jobs=1): err= 0: pid=3784947: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=110, BW=111MiB/s (116MB/s)(1126MiB/10163msec) 00:14:56.000 slat (usec): min=42, max=130778, avg=8938.69, stdev=12425.69 00:14:56.000 clat (msec): min=91, max=1903, avg=1085.13, stdev=510.13 00:14:56.000 lat (msec): 
min=166, max=1911, avg=1094.07, stdev=512.83 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 288], 5.00th=[ 342], 10.00th=[ 401], 20.00th=[ 510], 00:14:56.000 | 30.00th=[ 625], 40.00th=[ 844], 50.00th=[ 1183], 60.00th=[ 1385], 00:14:56.000 | 70.00th=[ 1519], 80.00th=[ 1620], 90.00th=[ 1703], 95.00th=[ 1770], 00:14:56.000 | 99.00th=[ 1854], 99.50th=[ 1871], 99.90th=[ 1905], 99.95th=[ 1905], 00:14:56.000 | 99.99th=[ 1905] 00:14:56.000 bw ( KiB/s): min=26624, max=335872, per=3.40%, avg=113543.17, stdev=76091.95, samples=18 00:14:56.000 iops : min= 26, max= 328, avg=110.83, stdev=74.34, samples=18 00:14:56.000 lat (msec) : 100=0.09%, 250=0.71%, 500=18.65%, 750=17.67%, 1000=8.08% 00:14:56.000 lat (msec) : 2000=54.80% 00:14:56.000 cpu : usr=0.08%, sys=1.93%, ctx=1877, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.000 issued rwts: total=1126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job5: (groupid=0, jobs=1): err= 0: pid=3784954: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=192, BW=192MiB/s (202MB/s)(1931MiB/10032msec) 00:14:56.000 slat (usec): min=37, max=2075.8k, avg=5176.17, stdev=60658.52 00:14:56.000 clat (msec): min=30, max=4637, avg=445.45, stdev=551.30 00:14:56.000 lat (msec): min=31, max=4662, avg=450.63, stdev=559.92 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 71], 5.00th=[ 228], 10.00th=[ 255], 20.00th=[ 257], 00:14:56.000 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 355], 60.00th=[ 393], 00:14:56.000 | 70.00th=[ 435], 80.00th=[ 514], 90.00th=[ 693], 95.00th=[ 751], 00:14:56.000 | 99.00th=[ 4597], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:14:56.000 | 99.99th=[ 4665] 00:14:56.000 bw ( KiB/s): min=143360, max=503808, per=10.06%, avg=335872.00, stdev=128649.62, samples=11 00:14:56.000 iops : min= 140, max= 492, avg=328.00, stdev=125.63, samples=11 00:14:56.000 lat (msec) : 50=0.47%, 100=1.29%, 250=3.83%, 500=73.85%, 750=15.28% 00:14:56.000 lat (msec) : 1000=3.57%, >=2000=1.71% 00:14:56.000 cpu : usr=0.05%, sys=2.05%, ctx=1993, majf=0, minf=32769 00:14:56.000 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:14:56.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.000 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.000 issued rwts: total=1931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.000 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.000 job5: (groupid=0, jobs=1): err= 0: pid=3784955: Fri Dec 6 16:27:48 2024 00:14:56.000 read: IOPS=52, BW=52.9MiB/s (55.5MB/s)(533MiB/10074msec) 00:14:56.000 slat (usec): min=41, max=2086.2k, avg=18759.64, stdev=115300.10 00:14:56.000 clat (msec): min=73, max=4269, avg=1921.85, stdev=1252.83 00:14:56.000 lat (msec): min=74, max=4289, avg=1940.60, stdev=1256.78 00:14:56.000 clat percentiles (msec): 00:14:56.000 | 1.00th=[ 116], 5.00th=[ 197], 10.00th=[ 439], 20.00th=[ 1045], 00:14:56.000 | 30.00th=[ 1250], 40.00th=[ 1351], 50.00th=[ 1401], 60.00th=[ 1586], 00:14:56.000 | 70.00th=[ 2005], 80.00th=[ 3440], 90.00th=[ 4010], 95.00th=[ 4245], 00:14:56.000 | 99.00th=[ 4245], 99.50th=[ 4279], 99.90th=[ 4279], 99.95th=[ 4279], 00:14:56.000 | 99.99th=[ 4279] 00:14:56.000 bw ( KiB/s): min= 4096, 
max=186368, per=2.26%, avg=75540.64, stdev=58370.81, samples=11 00:14:56.000 iops : min= 4, max= 182, avg=73.64, stdev=56.85, samples=11 00:14:56.000 lat (msec) : 100=0.75%, 250=6.38%, 500=3.38%, 750=2.06%, 1000=5.44% 00:14:56.000 lat (msec) : 2000=51.59%, >=2000=30.39% 00:14:56.000 cpu : usr=0.03%, sys=1.18%, ctx=1682, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.001 issued rwts: total=533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784956: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=111, BW=112MiB/s (117MB/s)(1125MiB/10051msec) 00:14:56.001 slat (usec): min=25, max=2058.8k, avg=8914.38, stdev=81122.89 00:14:56.001 clat (msec): min=19, max=4699, avg=689.05, stdev=385.33 00:14:56.001 lat (msec): min=79, max=4714, avg=697.96, stdev=404.25 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 91], 5.00th=[ 372], 10.00th=[ 397], 20.00th=[ 435], 00:14:56.001 | 30.00th=[ 472], 40.00th=[ 518], 50.00th=[ 726], 60.00th=[ 776], 00:14:56.001 | 70.00th=[ 852], 80.00th=[ 885], 90.00th=[ 944], 95.00th=[ 969], 00:14:56.001 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 4665], 99.95th=[ 4732], 00:14:56.001 | 99.99th=[ 4732] 00:14:56.001 bw ( KiB/s): min=88064, max=311296, per=5.54%, avg=184921.09, stdev=74130.38, samples=11 00:14:56.001 iops : min= 86, max= 304, avg=180.45, stdev=72.35, samples=11 00:14:56.001 lat (msec) : 20=0.09%, 100=1.33%, 250=1.42%, 500=33.87%, 750=20.09% 00:14:56.001 lat (msec) : 1000=40.80%, 2000=1.24%, >=2000=1.16% 00:14:56.001 cpu : usr=0.02%, sys=1.01%, ctx=1171, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.001 issued rwts: total=1125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784957: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=75, BW=75.9MiB/s (79.6MB/s)(765MiB/10073msec) 00:14:56.001 slat (usec): min=46, max=2124.9k, avg=13085.38, stdev=123488.65 00:14:56.001 clat (msec): min=58, max=4599, avg=1029.72, stdev=902.39 00:14:56.001 lat (msec): min=108, max=4616, avg=1042.81, stdev=913.48 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 123], 5.00th=[ 271], 10.00th=[ 426], 20.00th=[ 600], 00:14:56.001 | 30.00th=[ 600], 40.00th=[ 634], 50.00th=[ 659], 60.00th=[ 709], 00:14:56.001 | 70.00th=[ 735], 80.00th=[ 793], 90.00th=[ 2769], 95.00th=[ 2802], 00:14:56.001 | 99.00th=[ 2869], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:14:56.001 | 99.99th=[ 4597] 00:14:56.001 bw ( KiB/s): min=45056, max=222786, per=4.82%, avg=160933.00, stdev=63850.86, samples=8 00:14:56.001 iops : min= 44, max= 217, avg=157.00, stdev=62.25, samples=8 00:14:56.001 lat (msec) : 100=0.13%, 250=4.05%, 500=7.45%, 750=61.05%, 1000=8.76% 00:14:56.001 lat (msec) : >=2000=18.56% 00:14:56.001 cpu : usr=0.05%, sys=1.58%, ctx=665, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.001 issued rwts: total=765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784958: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=61, BW=61.3MiB/s (64.3MB/s)(644MiB/10499msec) 00:14:56.001 slat (usec): min=330, max=1989.9k, avg=16174.22, stdev=119792.13 00:14:56.001 clat (msec): min=79, max=4407, avg=1666.07, stdev=1037.20 00:14:56.001 lat (msec): min=768, max=4415, avg=1682.24, stdev=1040.85 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 768], 5.00th=[ 776], 10.00th=[ 802], 20.00th=[ 894], 00:14:56.001 | 30.00th=[ 927], 40.00th=[ 1028], 50.00th=[ 1053], 60.00th=[ 1250], 00:14:56.001 | 70.00th=[ 2433], 80.00th=[ 2635], 90.00th=[ 3104], 95.00th=[ 4329], 00:14:56.001 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4396], 00:14:56.001 | 99.99th=[ 4396] 00:14:56.001 bw ( KiB/s): min=51200, max=169984, per=3.52%, avg=117418.67, stdev=42146.03, samples=9 00:14:56.001 iops : min= 50, max= 166, avg=114.67, stdev=41.16, samples=9 00:14:56.001 lat (msec) : 100=0.16%, 750=0.16%, 1000=37.73%, 2000=26.86%, >=2000=35.09% 00:14:56.001 cpu : usr=0.03%, sys=1.34%, ctx=1351, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.001 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784959: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=152, BW=153MiB/s (160MB/s)(1542MiB/10100msec) 00:14:56.001 slat (usec): min=27, max=1645.4k, avg=6505.98, stdev=47501.18 00:14:56.001 clat (msec): min=61, max=2207, avg=696.10, stdev=351.00 00:14:56.001 lat (msec): min=119, max=2211, avg=702.61, stdev=353.71 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 140], 5.00th=[ 253], 10.00th=[ 279], 20.00th=[ 376], 00:14:56.001 | 30.00th=[ 550], 40.00th=[ 609], 50.00th=[ 667], 60.00th=[ 735], 00:14:56.001 | 70.00th=[ 810], 80.00th=[ 902], 90.00th=[ 1167], 95.00th=[ 1284], 00:14:56.001 | 99.00th=[ 2198], 99.50th=[ 2198], 99.90th=[ 2198], 99.95th=[ 2198], 00:14:56.001 | 99.99th=[ 2198] 00:14:56.001 bw ( KiB/s): min=100352, max=342016, per=5.78%, avg=193004.67, stdev=64204.91, samples=15 00:14:56.001 iops : min= 98, max= 334, avg=188.40, stdev=62.69, samples=15 00:14:56.001 lat (msec) : 100=0.06%, 250=2.79%, 500=25.55%, 750=34.82%, 1000=22.37% 00:14:56.001 lat (msec) : 2000=12.58%, >=2000=1.82% 00:14:56.001 cpu : usr=0.07%, sys=1.69%, ctx=2117, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.001 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784960: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=102, BW=103MiB/s (108MB/s)(1037MiB/10074msec) 00:14:56.001 slat (usec): min=30, max=2079.9k, avg=9644.20, stdev=82848.89 
00:14:56.001 clat (msec): min=68, max=3568, avg=1002.10, stdev=972.52 00:14:56.001 lat (msec): min=76, max=3580, avg=1011.74, stdev=976.79 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 215], 5.00th=[ 376], 10.00th=[ 376], 20.00th=[ 393], 00:14:56.001 | 30.00th=[ 456], 40.00th=[ 584], 50.00th=[ 659], 60.00th=[ 776], 00:14:56.001 | 70.00th=[ 860], 80.00th=[ 969], 90.00th=[ 3373], 95.00th=[ 3507], 00:14:56.001 | 99.00th=[ 3540], 99.50th=[ 3540], 99.90th=[ 3540], 99.95th=[ 3574], 00:14:56.001 | 99.99th=[ 3574] 00:14:56.001 bw ( KiB/s): min=10240, max=331776, per=4.85%, avg=161978.18, stdev=107278.67, samples=11 00:14:56.001 iops : min= 10, max= 324, avg=158.18, stdev=104.76, samples=11 00:14:56.001 lat (msec) : 100=0.39%, 250=0.96%, 500=33.56%, 750=23.72%, 1000=25.17% 00:14:56.001 lat (msec) : 2000=2.03%, >=2000=14.18% 00:14:56.001 cpu : usr=0.01%, sys=1.42%, ctx=1579, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.001 issued rwts: total=1037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784961: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=63, BW=63.7MiB/s (66.8MB/s)(641MiB/10068msec) 00:14:56.001 slat (usec): min=364, max=2091.0k, avg=15619.04, stdev=105487.78 00:14:56.001 clat (msec): min=53, max=3942, avg=1544.61, stdev=1082.36 00:14:56.001 lat (msec): min=78, max=3955, avg=1560.23, stdev=1087.30 00:14:56.001 clat percentiles (msec): 00:14:56.001 | 1.00th=[ 155], 5.00th=[ 567], 10.00th=[ 584], 20.00th=[ 676], 00:14:56.001 | 30.00th=[ 902], 40.00th=[ 1011], 50.00th=[ 1133], 60.00th=[ 1334], 00:14:56.001 | 70.00th=[ 1603], 80.00th=[ 2937], 90.00th=[ 3440], 95.00th=[ 3842], 00:14:56.001 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:14:56.001 | 99.99th=[ 3943] 00:14:56.001 bw ( KiB/s): min= 4096, max=225280, per=2.62%, avg=87454.17, stdev=74033.16, samples=12 00:14:56.001 iops : min= 4, max= 220, avg=85.25, stdev=72.43, samples=12 00:14:56.001 lat (msec) : 100=0.62%, 250=1.56%, 500=0.47%, 750=20.59%, 1000=15.29% 00:14:56.001 lat (msec) : 2000=40.72%, >=2000=20.75% 00:14:56.001 cpu : usr=0.05%, sys=0.93%, ctx=2008, majf=0, minf=32769 00:14:56.001 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:14:56.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.001 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.001 issued rwts: total=641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.001 job5: (groupid=0, jobs=1): err= 0: pid=3784962: Fri Dec 6 16:27:48 2024 00:14:56.001 read: IOPS=112, BW=112MiB/s (118MB/s)(1169MiB/10431msec) 00:14:56.001 slat (usec): min=36, max=1994.5k, avg=8849.94, stdev=96018.87 00:14:56.001 clat (msec): min=81, max=4286, avg=713.94, stdev=818.66 00:14:56.001 lat (msec): min=240, max=4308, avg=722.79, stdev=828.07 00:14:56.001 clat percentiles (msec): 00:14:56.002 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 257], 00:14:56.002 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 279], 60.00th=[ 550], 00:14:56.002 | 70.00th=[ 667], 80.00th=[ 743], 90.00th=[ 2366], 95.00th=[ 2869], 00:14:56.002 | 99.00th=[ 3239], 99.50th=[ 
4178], 99.90th=[ 4279], 99.95th=[ 4279], 00:14:56.002 | 99.99th=[ 4279] 00:14:56.002 bw ( KiB/s): min=61440, max=503808, per=7.09%, avg=236885.33, stdev=161259.68, samples=9 00:14:56.002 iops : min= 60, max= 492, avg=231.33, stdev=157.48, samples=9 00:14:56.002 lat (msec) : 100=0.09%, 250=0.60%, 500=57.49%, 750=22.84%, 1000=3.51% 00:14:56.002 lat (msec) : 2000=3.51%, >=2000=11.98% 00:14:56.002 cpu : usr=0.02%, sys=1.09%, ctx=1265, majf=0, minf=32769 00:14:56.002 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:14:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.002 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.002 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.002 job5: (groupid=0, jobs=1): err= 0: pid=3784963: Fri Dec 6 16:27:48 2024 00:14:56.002 read: IOPS=159, BW=159MiB/s (167MB/s)(1594MiB/10018msec) 00:14:56.002 slat (usec): min=31, max=1662.6k, avg=6271.50, stdev=42426.72 00:14:56.002 clat (msec): min=15, max=2121, avg=636.81, stdev=331.38 00:14:56.002 lat (msec): min=17, max=2219, avg=643.08, stdev=335.31 00:14:56.002 clat percentiles (msec): 00:14:56.002 | 1.00th=[ 46], 5.00th=[ 284], 10.00th=[ 368], 20.00th=[ 372], 00:14:56.002 | 30.00th=[ 393], 40.00th=[ 477], 50.00th=[ 558], 60.00th=[ 684], 00:14:56.002 | 70.00th=[ 735], 80.00th=[ 793], 90.00th=[ 1116], 95.00th=[ 1452], 00:14:56.002 | 99.00th=[ 1569], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 2123], 00:14:56.002 | 99.99th=[ 2123] 00:14:56.002 bw ( KiB/s): min=26624, max=339968, per=5.46%, avg=182284.27, stdev=88464.70, samples=15 00:14:56.002 iops : min= 26, max= 332, avg=178.00, stdev=86.40, samples=15 00:14:56.002 lat (msec) : 20=0.13%, 50=1.00%, 100=1.19%, 250=2.13%, 500=38.02% 00:14:56.002 lat (msec) : 750=29.86%, 1000=13.49%, 2000=14.12%, >=2000=0.06% 00:14:56.002 cpu : usr=0.06%, sys=1.39%, ctx=2456, majf=0, minf=32769 00:14:56.002 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:14:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.002 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.002 issued rwts: total=1594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.002 job5: (groupid=0, jobs=1): err= 0: pid=3784964: Fri Dec 6 16:27:48 2024 00:14:56.002 read: IOPS=83, BW=83.4MiB/s (87.4MB/s)(835MiB/10015msec) 00:14:56.002 slat (usec): min=71, max=2067.2k, avg=11973.17, stdev=92337.24 00:14:56.002 clat (msec): min=13, max=4672, avg=913.23, stdev=623.29 00:14:56.002 lat (msec): min=15, max=4681, avg=925.21, stdev=639.16 00:14:56.002 clat percentiles (msec): 00:14:56.002 | 1.00th=[ 22], 5.00th=[ 74], 10.00th=[ 228], 20.00th=[ 518], 00:14:56.002 | 30.00th=[ 550], 40.00th=[ 584], 50.00th=[ 760], 60.00th=[ 818], 00:14:56.002 | 70.00th=[ 1217], 80.00th=[ 1435], 90.00th=[ 1720], 95.00th=[ 1972], 00:14:56.002 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 4665], 99.95th=[ 4665], 00:14:56.002 | 99.99th=[ 4665] 00:14:56.002 bw ( KiB/s): min=55185, max=239616, per=3.43%, avg=114676.90, stdev=60604.29, samples=10 00:14:56.002 iops : min= 53, max= 234, avg=111.90, stdev=59.28, samples=10 00:14:56.002 lat (msec) : 20=0.84%, 50=2.63%, 100=3.11%, 250=4.07%, 500=8.38% 00:14:56.002 lat (msec) : 750=27.66%, 1000=18.32%, 2000=30.66%, >=2000=4.31% 00:14:56.002 cpu : usr=0.03%, sys=1.16%, 
ctx=2016, majf=0, minf=32769 00:14:56.002 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.5% 00:14:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.002 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:56.002 issued rwts: total=835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.002 job5: (groupid=0, jobs=1): err= 0: pid=3784966: Fri Dec 6 16:27:48 2024 00:14:56.002 read: IOPS=67, BW=67.1MiB/s (70.3MB/s)(673MiB/10033msec) 00:14:56.002 slat (usec): min=38, max=2090.4k, avg=14858.35, stdev=102740.76 00:14:56.002 clat (msec): min=30, max=3500, avg=1552.92, stdev=1081.97 00:14:56.002 lat (msec): min=66, max=3517, avg=1567.78, stdev=1084.34 00:14:56.002 clat percentiles (msec): 00:14:56.002 | 1.00th=[ 180], 5.00th=[ 384], 10.00th=[ 397], 20.00th=[ 464], 00:14:56.002 | 30.00th=[ 869], 40.00th=[ 1053], 50.00th=[ 1217], 60.00th=[ 1385], 00:14:56.002 | 70.00th=[ 1821], 80.00th=[ 3004], 90.00th=[ 3440], 95.00th=[ 3473], 00:14:56.002 | 99.00th=[ 3507], 99.50th=[ 3507], 99.90th=[ 3507], 99.95th=[ 3507], 00:14:56.002 | 99.99th=[ 3507] 00:14:56.002 bw ( KiB/s): min= 4096, max=313344, per=2.79%, avg=93178.83, stdev=90594.88, samples=12 00:14:56.002 iops : min= 4, max= 306, avg=90.92, stdev=88.53, samples=12 00:14:56.002 lat (msec) : 50=0.15%, 100=0.30%, 250=0.89%, 500=20.21%, 750=4.61% 00:14:56.002 lat (msec) : 1000=11.89%, 2000=33.73%, >=2000=28.23% 00:14:56.002 cpu : usr=0.01%, sys=1.25%, ctx=2171, majf=0, minf=32769 00:14:56.002 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:14:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.002 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.002 issued rwts: total=673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:56.002 job5: (groupid=0, jobs=1): err= 0: pid=3784967: Fri Dec 6 16:27:48 2024 00:14:56.002 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(559MiB/10024msec) 00:14:56.002 slat (usec): min=31, max=2089.2k, avg=17890.85, stdev=114533.75 00:14:56.002 clat (msec): min=20, max=3516, avg=1664.86, stdev=987.57 00:14:56.002 lat (msec): min=24, max=3532, avg=1682.75, stdev=991.98 00:14:56.002 clat percentiles (msec): 00:14:56.002 | 1.00th=[ 32], 5.00th=[ 309], 10.00th=[ 493], 20.00th=[ 961], 00:14:56.002 | 30.00th=[ 1217], 40.00th=[ 1301], 50.00th=[ 1401], 60.00th=[ 1586], 00:14:56.002 | 70.00th=[ 1754], 80.00th=[ 3071], 90.00th=[ 3406], 95.00th=[ 3473], 00:14:56.002 | 99.00th=[ 3507], 99.50th=[ 3507], 99.90th=[ 3507], 99.95th=[ 3507], 00:14:56.002 | 99.99th=[ 3507] 00:14:56.002 bw ( KiB/s): min=36864, max=143360, per=2.65%, avg=88473.60, stdev=32565.42, samples=10 00:14:56.002 iops : min= 36, max= 140, avg=86.40, stdev=31.80, samples=10 00:14:56.002 lat (msec) : 50=1.25%, 100=1.07%, 250=1.97%, 500=8.23%, 750=2.86% 00:14:56.002 lat (msec) : 1000=6.98%, 2000=54.92%, >=2000=22.72% 00:14:56.002 cpu : usr=0.06%, sys=0.85%, ctx=1690, majf=0, minf=32769 00:14:56.002 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:14:56.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.002 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:14:56.002 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.002 latency : target=0, window=0, percentile=100.00%, depth=128 
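Each per-job block above carries a 'bw ( KiB/s): ... avg=..., samples=N' line. A minimal sketch for averaging those per-job figures from a saved copy of this output (the fio.log filename is an assumption, not part of this run):

  grep -E 'bw \( *KiB/s\)' fio.log \
    | awk -F'avg=' '{ split($2, a, ","); sum += a[1]; n++ }
                    END { if (n) printf "jobs=%d mean(avg bw)=%.1f KiB/s\n", n, sum/n }'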
00:14:56.002 00:14:56.002 Run status group 0 (all jobs): 00:14:56.002 READ: bw=3262MiB/s (3420MB/s), 1380KiB/s-197MiB/s (1413kB/s-206MB/s), io=40.2GiB (43.1GB), run=10010-12613msec 00:14:56.002 00:14:56.002 Disk stats (read/write): 00:14:56.002 nvme0n1: ios=55600/0, merge=0/0, ticks=7646090/0, in_queue=7646090, util=98.76% 00:14:56.002 nvme1n1: ios=12413/0, merge=0/0, ticks=8639028/0, in_queue=8639028, util=99.03% 00:14:56.002 nvme2n1: ios=45744/0, merge=0/0, ticks=7051978/0, in_queue=7051978, util=99.05% 00:14:56.002 nvme3n1: ios=43569/0, merge=0/0, ticks=7936929/0, in_queue=7936929, util=99.12% 00:14:56.002 nvme4n1: ios=66727/0, merge=0/0, ticks=7372332/0, in_queue=7372332, util=99.05% 00:14:56.002 nvme5n1: ios=104253/0, merge=0/0, ticks=7977370/0, in_queue=7977370, util=98.81% 00:14:56.002 16:27:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:14:56.002 16:27:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:14:56.002 16:27:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:56.002 16:27:48 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:14:56.002 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:56.002 16:27:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:56.261 
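The loop traced above disconnects each controller by NQN and then polls lsblk until the matching serial disappears. A condensed sketch of that disconnect-and-wait pattern; the helper name and the 15-try timeout are assumptions, not the suite's own:

  wait_serial_gone() {
      local serial=$1 nqn=$2 i=0
      nvme disconnect -n "$nqn"
      # Poll until the serial no longer shows up in the block-device list.
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( ++i > 15 )) && return 1   # still present after ~15 tries, give up
          sleep 1
      done
      return 0
  }
  wait_serial_gone SPDK00000000000000 nqn.2016-06.io.spdk:cnode0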
16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:56.261 16:27:50 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:57.196 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:57.196 16:27:51 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:58.130 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 
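Once a serial has dropped out, each pass deletes the subsystem over RPC; rpc_cmd here is the suite's wrapper around SPDK's scripts/rpc.py, so the direct equivalent (default RPC socket assumed) is:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2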
00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.130 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:58.388 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.388 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:58.388 16:27:52 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:59.323 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:59.323 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:14:59.323 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:14:59.323 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:59.323 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:14:59.323 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:14:59.324 16:27:53 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:00.258 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:00.258 rmmod nvme_rdma 00:15:00.258 rmmod nvme_fabrics 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3783347 ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3783347 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3783347 ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3783347 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3783347 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3783347' 00:15:00.258 killing process with pid 3783347 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3783347 00:15:00.258 16:27:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3783347 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:00.825 00:15:00.825 real 0m32.502s 00:15:00.825 user 1m54.038s 00:15:00.825 sys 0m14.434s 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:00.825 ************************************ 00:15:00.825 END TEST nvmf_srq_overwhelm 00:15:00.825 ************************************ 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.825 ************************************ 00:15:00.825 START TEST nvmf_shutdown 00:15:00.825 ************************************ 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:00.825 * Looking for test storage... 
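The killprocess call traced above boils down to a liveness check, a guard against killing sudo, then kill and reap. A sketch reconstructed from the xtrace, so treat it as an approximation of the real helper:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0                         # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                                # reap if it is our child
  }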
00:15:00.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.825 --rc genhtml_branch_coverage=1 00:15:00.825 --rc genhtml_function_coverage=1 00:15:00.825 --rc genhtml_legend=1 00:15:00.825 --rc geninfo_all_blocks=1 00:15:00.825 --rc geninfo_unexecuted_blocks=1 00:15:00.825 00:15:00.825 ' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.825 --rc genhtml_branch_coverage=1 00:15:00.825 --rc genhtml_function_coverage=1 00:15:00.825 --rc genhtml_legend=1 00:15:00.825 --rc geninfo_all_blocks=1 00:15:00.825 --rc geninfo_unexecuted_blocks=1 00:15:00.825 00:15:00.825 ' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.825 --rc genhtml_branch_coverage=1 00:15:00.825 --rc genhtml_function_coverage=1 00:15:00.825 --rc genhtml_legend=1 00:15:00.825 --rc geninfo_all_blocks=1 00:15:00.825 --rc geninfo_unexecuted_blocks=1 00:15:00.825 00:15:00.825 ' 00:15:00.825 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.825 --rc genhtml_branch_coverage=1 00:15:00.825 --rc genhtml_function_coverage=1 00:15:00.825 --rc genhtml_legend=1 00:15:00.826 --rc geninfo_all_blocks=1 00:15:00.826 --rc geninfo_unexecuted_blocks=1 00:15:00.826 00:15:00.826 ' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.826 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:00.826 16:27:55 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.826 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:01.086 ************************************ 00:15:01.086 START TEST nvmf_shutdown_tc1 00:15:01.086 ************************************ 00:15:01.086 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:15:01.086 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:15:01.086 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:01.086 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.087 16:27:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:06.354 16:28:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:06.354 16:28:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:15:06.354 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:06.354 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:06.355 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:06.355 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:15:06.355 Found net devices under 0000:18:00.0: mlx_0_0 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:06.355 Found net devices under 0000:18:00.1: mlx_0_1 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
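The device discovery traced above maps each matching PCI address to its kernel net devices through sysfs. The same loop, lifted out of the trace (pci_devs is assumed to already hold the Mellanox PCI addresses):

  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries for this NIC
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done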
00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:06.355 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:06.614 16:28:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:06.614 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.614 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:06.614 altname enp24s0f0np0 00:15:06.614 altname ens785f0np0 00:15:06.614 inet 192.168.100.8/24 scope global mlx_0_0 00:15:06.614 valid_lft forever preferred_lft forever 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:06.614 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:06.615 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:06.615 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:06.615 altname enp24s0f1np1 00:15:06.615 altname ens785f1np1 00:15:06.615 inet 192.168.100.9/24 scope global mlx_0_1 00:15:06.615 valid_lft forever preferred_lft forever 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:06.615 
16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:06.615 192.168.100.9' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:06.615 192.168.100.9' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:15:06.615 16:28:01 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:06.615 192.168.100.9' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3791447 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3791447 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3791447 ']' 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.615 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:06.615 [2024-12-06 16:28:01.254651] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
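The get_ip_address pipeline traced above (ip -o -4 addr show | awk | cut) is what turns each RDMA interface into a target address, and head/tail then split the resulting list into the first and second target IPs. A condensed sketch of the same steps, with the interface names and resulting addresses from this run shown in comments:

# `ip -o -4` prints one line per address; field 4 is "ADDR/PREFIX",
# so `cut -d/ -f1` leaves just the address.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9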
00:15:06.615 [2024-12-06 16:28:01.254695] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.615 [2024-12-06 16:28:01.313049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.872 [2024-12-06 16:28:01.351400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.872 [2024-12-06 16:28:01.351437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.872 [2024-12-06 16:28:01.351444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.872 [2024-12-06 16:28:01.351449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.872 [2024-12-06 16:28:01.351453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.872 [2024-12-06 16:28:01.352676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.872 [2024-12-06 16:28:01.352763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.872 [2024-12-06 16:28:01.352848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.872 [2024-12-06 16:28:01.352849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.872 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:06.872 [2024-12-06 16:28:01.522646] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd1c3c0/0xd208b0) succeed. 00:15:06.872 [2024-12-06 16:28:01.531180] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd1da50/0xd61f50) succeed. 
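The rpc_cmd call at shutdown.sh@21 above creates the RDMA transport inside the freshly started nvmf_tgt; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Roughly the equivalent direct invocation, with a listener call sketched for context (the listener command is an assumption based on the NQN and address this run uses later, not quoted from this log):

# Create the RDMA transport with the same options as the trace:
#   --num-shared-buffers 1024  shared receive buffer pool size
#   -u 8192                    I/O unit size in bytes
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Listeners are then attached per subsystem, which produces the
# "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice below:
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420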
00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.129 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.130 16:28:01 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:07.130 Malloc1 00:15:07.130 [2024-12-06 16:28:01.741610] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:07.130 Malloc2 00:15:07.130 Malloc3 00:15:07.130 Malloc4 00:15:07.386 Malloc5 00:15:07.387 Malloc6 00:15:07.387 Malloc7 00:15:07.387 Malloc8 00:15:07.387 Malloc9 00:15:07.387 Malloc10 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3791717 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3791717 /var/tmp/bdevperf.sock 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3791717 ']' 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
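waitforlisten above blocks until the app being launched (here bdev_svc with its private RPC socket /var/tmp/bdevperf.sock) is ready to accept RPCs. The real helper in autotest_common.sh does more bookkeeping; a simplified sketch of the idea only, assuming that socket presence plus one successful RPC means "listening":

# Poll until $pid answers RPCs on $rpc_addr, or give up after ~10 s.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1      # app died while starting
        if [[ -S $rpc_addr ]] &&
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}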
00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.644 { 00:15:07.644 "params": { 00:15:07.644 "name": "Nvme$subsystem", 00:15:07.644 "trtype": "$TEST_TRANSPORT", 00:15:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.644 "adrfam": "ipv4", 00:15:07.644 "trsvcid": "$NVMF_PORT", 00:15:07.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.644 "hdgst": ${hdgst:-false}, 00:15:07.644 "ddgst": ${ddgst:-false} 00:15:07.644 }, 00:15:07.644 "method": "bdev_nvme_attach_controller" 00:15:07.644 } 00:15:07.644 EOF 00:15:07.644 )") 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.644 { 00:15:07.644 "params": { 00:15:07.644 "name": "Nvme$subsystem", 00:15:07.644 "trtype": "$TEST_TRANSPORT", 00:15:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.644 "adrfam": "ipv4", 00:15:07.644 "trsvcid": "$NVMF_PORT", 00:15:07.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.644 "hdgst": ${hdgst:-false}, 00:15:07.644 "ddgst": ${ddgst:-false} 00:15:07.644 }, 00:15:07.644 "method": "bdev_nvme_attach_controller" 00:15:07.644 } 00:15:07.644 EOF 00:15:07.644 )") 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.644 { 00:15:07.644 "params": { 00:15:07.644 "name": "Nvme$subsystem", 00:15:07.644 "trtype": "$TEST_TRANSPORT", 00:15:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.644 "adrfam": "ipv4", 00:15:07.644 "trsvcid": "$NVMF_PORT", 00:15:07.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.644 "hdgst": ${hdgst:-false}, 00:15:07.644 "ddgst": ${ddgst:-false} 00:15:07.644 }, 00:15:07.644 "method": "bdev_nvme_attach_controller" 00:15:07.644 } 00:15:07.644 EOF 00:15:07.644 )") 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.644 { 00:15:07.644 "params": { 00:15:07.644 "name": "Nvme$subsystem", 00:15:07.644 "trtype": "$TEST_TRANSPORT", 00:15:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.644 "adrfam": "ipv4", 00:15:07.644 "trsvcid": "$NVMF_PORT", 00:15:07.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.644 "hdgst": ${hdgst:-false}, 00:15:07.644 "ddgst": ${ddgst:-false} 00:15:07.644 }, 00:15:07.644 "method": "bdev_nvme_attach_controller" 00:15:07.644 } 00:15:07.644 EOF 00:15:07.644 )") 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.644 { 00:15:07.644 "params": { 00:15:07.644 "name": "Nvme$subsystem", 00:15:07.644 "trtype": "$TEST_TRANSPORT", 00:15:07.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.644 "adrfam": "ipv4", 00:15:07.644 "trsvcid": "$NVMF_PORT", 00:15:07.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.644 "hdgst": ${hdgst:-false}, 00:15:07.644 "ddgst": ${ddgst:-false} 00:15:07.644 }, 00:15:07.644 "method": "bdev_nvme_attach_controller" 00:15:07.644 } 00:15:07.644 EOF 00:15:07.644 )") 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.644 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.645 { 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme$subsystem", 00:15:07.645 "trtype": "$TEST_TRANSPORT", 00:15:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "$NVMF_PORT", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.645 "hdgst": ${hdgst:-false}, 00:15:07.645 "ddgst": ${ddgst:-false} 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 } 00:15:07.645 EOF 00:15:07.645 )") 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.645 { 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme$subsystem", 00:15:07.645 "trtype": "$TEST_TRANSPORT", 00:15:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "$NVMF_PORT", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.645 "hdgst": ${hdgst:-false}, 00:15:07.645 "ddgst": ${ddgst:-false} 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 } 00:15:07.645 EOF 00:15:07.645 )") 00:15:07.645 [2024-12-06 16:28:02.215887] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
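The repeated config+=("$(cat <<-EOF ... EOF)") blocks traced above are gen_nvmf_target_json building one bdev_nvme_attach_controller JSON fragment per subsystem, with the shell filling in $subsystem, $TEST_TRANSPORT and friends; nvmf/common.sh@585 then joins the fragments with IFS=,. A trimmed sketch of just that build-and-join step, with the transport and port hard-coded to this run's values:

gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,                  # join the fragments with commas...
  printf '%s\n' "${config[*]}" # ...as nvmf/common.sh@585-586 does below
}

Calling gen_target_json_sketch 1 2 3 would emit three comma-joined fragments of the same shape as the printf output traced further down.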
00:15:07.645 [2024-12-06 16:28:02.215929] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.645 { 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme$subsystem", 00:15:07.645 "trtype": "$TEST_TRANSPORT", 00:15:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "$NVMF_PORT", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.645 "hdgst": ${hdgst:-false}, 00:15:07.645 "ddgst": ${ddgst:-false} 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 } 00:15:07.645 EOF 00:15:07.645 )") 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.645 { 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme$subsystem", 00:15:07.645 "trtype": "$TEST_TRANSPORT", 00:15:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "$NVMF_PORT", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.645 "hdgst": ${hdgst:-false}, 00:15:07.645 "ddgst": ${ddgst:-false} 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 } 00:15:07.645 EOF 00:15:07.645 )") 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:07.645 { 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme$subsystem", 00:15:07.645 "trtype": "$TEST_TRANSPORT", 00:15:07.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "$NVMF_PORT", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.645 "hdgst": ${hdgst:-false}, 00:15:07.645 "ddgst": ${ddgst:-false} 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 } 00:15:07.645 EOF 00:15:07.645 )") 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
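Both apps in this test consume that generated config through bash process substitution rather than a temp file, which is why the kernel-side name /dev/fd/63 shows up in the command lines. The pattern, as used at test/nvmf/target/shutdown.sh line 74 (quoted verbatim in the "Killed" message further down):

# <( ... ) expands to a /dev/fd path connected to the command's stdout,
# so the generated JSON never touches disk:
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")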
00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:07.645 16:28:02 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme1", 00:15:07.645 "trtype": "rdma", 00:15:07.645 "traddr": "192.168.100.8", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "4420", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.645 "hdgst": false, 00:15:07.645 "ddgst": false 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 },{ 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme2", 00:15:07.645 "trtype": "rdma", 00:15:07.645 "traddr": "192.168.100.8", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "4420", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:07.645 "hdgst": false, 00:15:07.645 "ddgst": false 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 },{ 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme3", 00:15:07.645 "trtype": "rdma", 00:15:07.645 "traddr": "192.168.100.8", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "4420", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:07.645 "hdgst": false, 00:15:07.645 "ddgst": false 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 },{ 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme4", 00:15:07.645 "trtype": "rdma", 00:15:07.645 "traddr": "192.168.100.8", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "4420", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:07.645 "hdgst": false, 00:15:07.645 "ddgst": false 00:15:07.645 }, 00:15:07.645 "method": "bdev_nvme_attach_controller" 00:15:07.645 },{ 00:15:07.645 "params": { 00:15:07.645 "name": "Nvme5", 00:15:07.645 "trtype": "rdma", 00:15:07.645 "traddr": "192.168.100.8", 00:15:07.645 "adrfam": "ipv4", 00:15:07.645 "trsvcid": "4420", 00:15:07.645 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:07.645 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 },{ 00:15:07.646 "params": { 00:15:07.646 "name": "Nvme6", 00:15:07.646 "trtype": "rdma", 00:15:07.646 "traddr": "192.168.100.8", 00:15:07.646 "adrfam": "ipv4", 00:15:07.646 "trsvcid": "4420", 00:15:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:07.646 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 },{ 00:15:07.646 "params": { 00:15:07.646 "name": "Nvme7", 00:15:07.646 "trtype": "rdma", 00:15:07.646 "traddr": "192.168.100.8", 00:15:07.646 "adrfam": "ipv4", 00:15:07.646 "trsvcid": "4420", 00:15:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:07.646 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 },{ 00:15:07.646 "params": { 00:15:07.646 "name": "Nvme8", 00:15:07.646 "trtype": "rdma", 00:15:07.646 "traddr": "192.168.100.8", 00:15:07.646 "adrfam": "ipv4", 00:15:07.646 "trsvcid": "4420", 00:15:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:07.646 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 },{ 00:15:07.646 "params": { 00:15:07.646 "name": "Nvme9", 00:15:07.646 "trtype": "rdma", 00:15:07.646 "traddr": "192.168.100.8", 00:15:07.646 "adrfam": "ipv4", 00:15:07.646 "trsvcid": "4420", 00:15:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:07.646 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 },{ 00:15:07.646 "params": { 00:15:07.646 "name": "Nvme10", 00:15:07.646 "trtype": "rdma", 00:15:07.646 "traddr": "192.168.100.8", 00:15:07.646 "adrfam": "ipv4", 00:15:07.646 "trsvcid": "4420", 00:15:07.646 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:07.646 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:07.646 "hdgst": false, 00:15:07.646 "ddgst": false 00:15:07.646 }, 00:15:07.646 "method": "bdev_nvme_attach_controller" 00:15:07.646 }' 00:15:07.646 [2024-12-06 16:28:02.276576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.646 [2024-12-06 16:28:02.314237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3791717 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:15:08.575 16:28:03 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:15:09.505 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3791717 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3791447 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:09.505 16:28:04 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": "bdev_nvme_attach_controller" 00:15:09.505 } 00:15:09.505 EOF 00:15:09.505 )") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": "bdev_nvme_attach_controller" 00:15:09.505 } 00:15:09.505 EOF 00:15:09.505 )") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": "bdev_nvme_attach_controller" 00:15:09.505 } 00:15:09.505 EOF 00:15:09.505 )") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": 
"bdev_nvme_attach_controller" 00:15:09.505 } 00:15:09.505 EOF 00:15:09.505 )") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": "bdev_nvme_attach_controller" 00:15:09.505 } 00:15:09.505 EOF 00:15:09.505 )") 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.505 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.505 { 00:15:09.505 "params": { 00:15:09.505 "name": "Nvme$subsystem", 00:15:09.505 "trtype": "$TEST_TRANSPORT", 00:15:09.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.505 "adrfam": "ipv4", 00:15:09.505 "trsvcid": "$NVMF_PORT", 00:15:09.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.505 "hdgst": ${hdgst:-false}, 00:15:09.505 "ddgst": ${ddgst:-false} 00:15:09.505 }, 00:15:09.505 "method": "bdev_nvme_attach_controller" 00:15:09.506 } 00:15:09.506 EOF 00:15:09.506 )") 00:15:09.506 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.506 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.506 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.506 { 00:15:09.506 "params": { 00:15:09.506 "name": "Nvme$subsystem", 00:15:09.506 "trtype": "$TEST_TRANSPORT", 00:15:09.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.506 "adrfam": "ipv4", 00:15:09.506 "trsvcid": "$NVMF_PORT", 00:15:09.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.506 "hdgst": ${hdgst:-false}, 00:15:09.506 "ddgst": ${ddgst:-false} 00:15:09.506 }, 00:15:09.506 "method": "bdev_nvme_attach_controller" 00:15:09.506 } 00:15:09.506 EOF 00:15:09.506 )") 00:15:09.506 [2024-12-06 16:28:04.228720] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:15:09.506 [2024-12-06 16:28:04.228766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792085 ] 00:15:09.506 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.764 { 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme$subsystem", 00:15:09.764 "trtype": "$TEST_TRANSPORT", 00:15:09.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "$NVMF_PORT", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.764 "hdgst": ${hdgst:-false}, 00:15:09.764 "ddgst": ${ddgst:-false} 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 } 00:15:09.764 EOF 00:15:09.764 )") 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.764 { 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme$subsystem", 00:15:09.764 "trtype": "$TEST_TRANSPORT", 00:15:09.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "$NVMF_PORT", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.764 "hdgst": ${hdgst:-false}, 00:15:09.764 "ddgst": ${ddgst:-false} 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 } 00:15:09.764 EOF 00:15:09.764 )") 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:09.764 { 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme$subsystem", 00:15:09.764 "trtype": "$TEST_TRANSPORT", 00:15:09.764 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "$NVMF_PORT", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.764 "hdgst": ${hdgst:-false}, 00:15:09.764 "ddgst": ${ddgst:-false} 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 } 00:15:09.764 EOF 00:15:09.764 )") 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
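With the target proven alive, shutdown.sh@92 reruns the same attach config under the real benchmark binary. A sketch of that invocation with the bdevperf flags from the trace spelled out (the JSON again arrives via process substitution):

#   -q 64      queue depth per bdev
#   -o 65536   I/O size in bytes (64 KiB)
#   -w verify  write, then read back and compare
#   -t 1       run time in seconds
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1

Its per-bdev results are the Device Information table below.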
00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:09.764 16:28:04 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme1", 00:15:09.764 "trtype": "rdma", 00:15:09.764 "traddr": "192.168.100.8", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "4420", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.764 "hdgst": false, 00:15:09.764 "ddgst": false 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 },{ 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme2", 00:15:09.764 "trtype": "rdma", 00:15:09.764 "traddr": "192.168.100.8", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "4420", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:09.764 "hdgst": false, 00:15:09.764 "ddgst": false 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 },{ 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme3", 00:15:09.764 "trtype": "rdma", 00:15:09.764 "traddr": "192.168.100.8", 00:15:09.764 "adrfam": "ipv4", 00:15:09.764 "trsvcid": "4420", 00:15:09.764 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:09.764 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:09.764 "hdgst": false, 00:15:09.764 "ddgst": false 00:15:09.764 }, 00:15:09.764 "method": "bdev_nvme_attach_controller" 00:15:09.764 },{ 00:15:09.764 "params": { 00:15:09.764 "name": "Nvme4", 00:15:09.764 "trtype": "rdma", 00:15:09.764 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme5", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme6", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme7", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme8", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme9", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 },{ 00:15:09.765 "params": { 00:15:09.765 "name": "Nvme10", 00:15:09.765 "trtype": "rdma", 00:15:09.765 "traddr": "192.168.100.8", 00:15:09.765 "adrfam": "ipv4", 00:15:09.765 "trsvcid": "4420", 00:15:09.765 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:09.765 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:09.765 "hdgst": false, 00:15:09.765 "ddgst": false 00:15:09.765 }, 00:15:09.765 "method": "bdev_nvme_attach_controller" 00:15:09.765 }' 00:15:09.765 [2024-12-06 16:28:04.287923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.765 [2024-12-06 16:28:04.326360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.698 Running I/O for 1 seconds... 00:15:11.889 3772.00 IOPS, 235.75 MiB/s 00:15:11.889 Latency(us) 00:15:11.889 [2024-12-06T15:28:06.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme1n1 : 1.17 406.63 25.41 0.00 0.00 155153.13 6359.42 214375.54 00:15:11.889 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme2n1 : 1.17 397.76 24.86 0.00 0.00 156054.88 8398.32 155344.59 00:15:11.889 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme3n1 : 1.17 411.10 25.69 0.00 0.00 149405.70 8592.50 149130.81 00:15:11.889 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme4n1 : 1.17 412.45 25.78 0.00 0.00 146963.35 4247.70 141363.58 00:15:11.889 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme5n1 : 1.17 388.26 24.27 0.00 0.00 153800.24 8495.41 129712.73 00:15:11.889 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme6n1 : 1.17 409.28 25.58 0.00 0.00 144417.92 8543.95 122722.23 00:15:11.889 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme7n1 : 1.17 409.76 25.61 0.00 0.00 142351.88 8738.13 116508.44 00:15:11.889 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme8n1 : 1.17 403.45 25.22 0.00 0.00 142221.02 8835.22 105634.32 00:15:11.889 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme9n1 : 1.17 387.02 24.19 0.00 0.00 145930.46 8592.50 103304.15 00:15:11.889 Job: 
Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:11.889 Verification LBA range: start 0x0 length 0x400 00:15:11.889 Nvme10n1 : 1.17 381.43 23.84 0.00 0.00 147018.98 8932.31 165441.99 00:15:11.889 [2024-12-06T15:28:06.617Z] =================================================================================================================== 00:15:11.889 [2024-12-06T15:28:06.617Z] Total : 4007.14 250.45 0.00 0.00 148297.99 4247.70 214375.54 00:15:11.889 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:15:11.889 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:15:11.889 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:12.147 rmmod nvme_rdma 00:15:12.147 rmmod nvme_fabrics 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3791447 ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3791447 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3791447 ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3791447 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3791447 00:15:12.147 16:28:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3791447' 00:15:12.147 killing process with pid 3791447 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3791447 00:15:12.147 16:28:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3791447 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:12.716 00:15:12.716 real 0m11.578s 00:15:12.716 user 0m27.271s 00:15:12.716 sys 0m5.145s 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:12.716 ************************************ 00:15:12.716 END TEST nvmf_shutdown_tc1 00:15:12.716 ************************************ 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:12.716 ************************************ 00:15:12.716 START TEST nvmf_shutdown_tc2 00:15:12.716 ************************************ 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.716 16:28:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:12.716 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:12.716 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:12.716 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:12.717 Found net devices under 0000:18:00.0: mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:12.717 Found net devices under 0000:18:00.1: mlx_0_1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:12.717 
16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:12.717 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:12.717 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:12.717 altname enp24s0f0np0 00:15:12.717 altname ens785f0np0 00:15:12.717 inet 192.168.100.8/24 scope global mlx_0_0 00:15:12.717 valid_lft forever preferred_lft forever 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:12.717 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:12.717 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:12.717 altname enp24s0f1np1 00:15:12.717 altname ens785f1np1 00:15:12.717 inet 192.168.100.9/24 scope global mlx_0_1 00:15:12.717 valid_lft forever preferred_lft forever 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:15:12.717 16:28:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:12.717 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:12.718 16:28:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:12.718 192.168.100.9' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:12.718 192.168.100.9' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:12.718 192.168.100.9' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3792884 00:15:12.718 16:28:07 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3792884 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3792884 ']' 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.718 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:12.977 [2024-12-06 16:28:07.454534] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:12.977 [2024-12-06 16:28:07.454578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.977 [2024-12-06 16:28:07.512301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.977 [2024-12-06 16:28:07.551241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.977 [2024-12-06 16:28:07.551277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.977 [2024-12-06 16:28:07.551283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.977 [2024-12-06 16:28:07.551288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.977 [2024-12-06 16:28:07.551293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
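A note on the core masks in play here: nvmf_tgt was started with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1 through 4, which is exactly what the four reactor notices just below report. bdevperf, launched later with -c 0x1, gets a single reactor on core 0. A minimal sketch for expanding such a mask into its core list (illustrative only, not part of the harness):

    # expand an SPDK/DPDK hex core mask into the cores it enables
    mask=0x1E
    for core in $(seq 0 63); do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"
        fi
    done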
00:15:12.977 [2024-12-06 16:28:07.552674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.977 [2024-12-06 16:28:07.552756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.977 [2024-12-06 16:28:07.552864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.977 [2024-12-06 16:28:07.552865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.977 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:12.977 [2024-12-06 16:28:07.699079] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94e3c0/0x9528b0) succeed. 00:15:13.236 [2024-12-06 16:28:07.707368] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x94fa50/0x993f50) succeed. 
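The nvmf_create_transport call traced above is what produces the two create_ib_device notices: the RDMA transport layer opens each mlx5 device that rdma_device_init made visible earlier. Issued by hand against the same target it looks like this (arguments copied verbatim from the trace; -u is the transport I/O unit size in bytes):

    # standalone equivalent of the traced rpc_cmd invocation
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
        -t rdma \
        --num-shared-buffers 1024 \
        -u 8192

The per-subsystem RPCs that the cat loop below writes into rpcs.txt then follow roughly this pattern for each of cnode1..cnode10 (a sketch; the bdev size and exact flags are assumptions, but the 192.168.100.8:4420 listener matches the listen notice that appears once the batch runs):

    # approximate per-subsystem setup batched through rpcs.txt
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420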
00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.236 16:28:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:13.236 Malloc1 00:15:13.236 [2024-12-06 16:28:07.915843] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:13.236 Malloc2 00:15:13.494 Malloc3 00:15:13.494 Malloc4 00:15:13.494 Malloc5 00:15:13.494 Malloc6 00:15:13.494 Malloc7 00:15:13.494 Malloc8 00:15:13.753 Malloc9 00:15:13.753 Malloc10 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3792958 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3792958 /var/tmp/bdevperf.sock 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3792958 ']' 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
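Decoded, that bdevperf launch runs a 10 second verify workload (write, read back, compare) at queue depth 64 with 64 KiB I/Os against the ten NVMe-oF controllers the generated JSON attaches; -r gives it a private RPC socket, separate from the target's /var/tmp/spdk.sock, which the waitforio polling further down uses for bdev_get_iostat. The same launch written against a saved file instead of the /dev/fd/63 process substitution (a sketch; the config path is made up for illustration):

    # equivalent launch with the generated config saved to a file first:
    #   -r: private RPC socket (polled by waitforio / bdev_get_iostat)
    #   --json: ten bdev_nvme_attach_controller entries from the helper
    #   -q 64: 64 outstanding I/Os per bdev; -o 65536: 64 KiB I/O size
    #   -w verify: data-integrity workload; -t 10: run for 10 seconds
    gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf.json
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10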
00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.753 { 00:15:13.753 "params": { 00:15:13.753 "name": "Nvme$subsystem", 00:15:13.753 "trtype": "$TEST_TRANSPORT", 00:15:13.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.753 "adrfam": "ipv4", 00:15:13.753 "trsvcid": "$NVMF_PORT", 00:15:13.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.753 "hdgst": ${hdgst:-false}, 00:15:13.753 "ddgst": ${ddgst:-false} 00:15:13.753 }, 00:15:13.753 "method": "bdev_nvme_attach_controller" 00:15:13.753 } 00:15:13.753 EOF 00:15:13.753 )") 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.753 
[2024-12-06 16:28:08.385852] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:13.753 [2024-12-06 16:28:08.385897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792958 ] 00:15:13.753 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.754 { 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme$subsystem", 00:15:13.754 "trtype": "$TEST_TRANSPORT", 00:15:13.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "$NVMF_PORT", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.754 "hdgst": ${hdgst:-false}, 00:15:13.754 "ddgst": ${ddgst:-false} 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 } 00:15:13.754 EOF 00:15:13.754 )") 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.754 { 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme$subsystem", 00:15:13.754 "trtype": "$TEST_TRANSPORT", 00:15:13.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "$NVMF_PORT", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.754 "hdgst": ${hdgst:-false}, 00:15:13.754 "ddgst": ${ddgst:-false} 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 } 00:15:13.754 EOF 00:15:13.754 )") 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:13.754 { 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme$subsystem", 00:15:13.754 "trtype": "$TEST_TRANSPORT", 00:15:13.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "$NVMF_PORT", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.754 "hdgst": ${hdgst:-false}, 00:15:13.754 "ddgst": ${ddgst:-false} 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 } 00:15:13.754 EOF 00:15:13.754 )") 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
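The trace above is gen_nvmf_target_json at work: each loop pass instantiates the heredoc template once with $subsystem substituted into the name and both NQNs, the fragments are comma-joined via IFS, and jq validates and pretty-prints the result that bdevperf reads on fd 63. Condensed into a self-contained sketch (transport, address and port hardwired to this run's values; the real helper takes them from the test environment):

    # essence of gen_nvmf_target_json: one attach-controller entry per argument
    gen_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("{
                \"params\": {
                    \"name\": \"Nvme$subsystem\",
                    \"trtype\": \"rdma\",
                    \"traddr\": \"192.168.100.8\",
                    \"adrfam\": \"ipv4\",
                    \"trsvcid\": \"4420\",
                    \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\",
                    \"hostnqn\": \"nqn.2016-06.io.spdk:host$subsystem\",
                    \"hdgst\": false,
                    \"ddgst\": false
                },
                \"method\": \"bdev_nvme_attach_controller\"
            }")
        done
        local IFS=,
        # comma-join the fragments, wrap them in a config array, pretty-print
        printf '{ "config": [ %s ] }\n' "${config[*]}" | jq .
    }

Called as gen_target_json_sketch 1 2 3 it emits three controller entries, which is the same shape as the full ten-subsystem config printed next.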
00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:15:13.754 16:28:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme1", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme2", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme3", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme4", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme5", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme6", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme7", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme8", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme9", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 },{ 00:15:13.754 "params": { 00:15:13.754 "name": "Nvme10", 00:15:13.754 "trtype": "rdma", 00:15:13.754 "traddr": "192.168.100.8", 00:15:13.754 "adrfam": "ipv4", 00:15:13.754 "trsvcid": "4420", 00:15:13.754 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:13.754 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:13.754 "hdgst": false, 00:15:13.754 "ddgst": false 00:15:13.754 }, 00:15:13.754 "method": "bdev_nvme_attach_controller" 00:15:13.754 }' 00:15:13.754 [2024-12-06 16:28:08.444463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.013 [2024-12-06 16:28:08.482272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.948 Running I/O for 10 seconds... 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.948 16:28:09 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:15:14.948 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.206 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=147 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 147 -ge 100 ']' 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3792958 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3792958 ']' 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3792958 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.465 16:28:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792958 00:15:15.465 16:28:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.465 16:28:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.465 16:28:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792958' 00:15:15.465 killing process with pid 3792958 00:15:15.465 16:28:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3792958
00:15:15.465 16:28:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3792958
00:15:15.465 Received shutdown signal, test time was about 0.751458 seconds
00:15:15.465
00:15:15.465 Latency(us)
00:15:15.465 [2024-12-06T15:28:10.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:15.465 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme1n1 : 0.74 367.75 22.98 0.00 0.00 170671.66 7767.23 198064.36
00:15:15.465 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme2n1 : 0.74 367.33 22.96 0.00 0.00 167482.85 7670.14 187190.23
00:15:15.465 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme3n1 : 0.74 381.66 23.85 0.00 0.00 158267.00 7718.68 179423.00
00:15:15.465 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme4n1 : 0.74 428.21 26.76 0.00 0.00 138316.07 4126.34 165441.99
00:15:15.465 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme5n1 : 0.74 430.13 26.88 0.00 0.00 135189.85 8446.86 111848.11
00:15:15.465 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme6n1 : 0.75 429.51 26.84 0.00 0.00 132050.19 8689.59 104857.60
00:15:15.465 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme7n1 : 0.75 431.52 26.97 0.00 0.00 128714.32 4223.43 101750.71
00:15:15.465 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme8n1 : 0.75 428.12 26.76 0.00 0.00 126970.08 9369.22 98643.82
00:15:15.465 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme9n1 : 0.75 427.18 26.70 0.00 0.00 124649.32 10048.85 93206.76
00:15:15.465 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:15.465 Verification LBA range: start 0x0 length 0x400
00:15:15.465 Nvme10n1 : 0.75 341.02 21.31 0.00 0.00 153094.26 8543.95 203501.42
00:15:15.465 [2024-12-06T15:28:10.193Z] ===================================================================================================================
00:15:15.465 [2024-12-06T15:28:10.193Z] Total : 4032.43 252.03 0.00 0.00 142332.75 4126.34 203501.42
00:15:15.724 16:28:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3792884
00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:15:16.655 16:28:11
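A quick sanity check on the table: bdevperf is running 64 KiB I/Os here (IO size: 65536), so the MiB/s column is simply IOPS divided by 16, e.g. 367.75 / 16 = 22.98 for Nvme1n1 and 4032.43 / 16 = 252.03 for the Total row:

awk 'BEGIN { printf "%.2f\n", 4032.43 / 16 }'   # 252.03, matches the Total MiB/s column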
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.655 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:16.655 rmmod nvme_rdma 00:15:16.655 rmmod nvme_fabrics 00:15:16.919 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.919 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:15:16.919 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:15:16.919 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3792884 ']' 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3792884 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3792884 ']' 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3792884 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792884 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792884' 00:15:16.920 killing process with pid 3792884 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3792884 00:15:16.920 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3792884 00:15:17.177 16:28:11 
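nvmftestfini tears down in the usual order: sync, unload nvme-rdma and nvme-fabrics (the rmmod lines above), then kill the target by PID. The unload is attempted in a retry loop because the kernel modules stay busy until all queue pairs drain; a rough sketch of that idiom (the 1-second back-off between attempts is illustrative, not taken from common.sh):

set +e                      # modprobe -r is expected to fail while references remain
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e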
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:17.177 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:17.177 00:15:17.177 real 0m4.673s 00:15:17.177 user 0m18.878s 00:15:17.177 sys 0m0.946s 00:15:17.177 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.177 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:17.177 ************************************ 00:15:17.177 END TEST nvmf_shutdown_tc2 00:15:17.177 ************************************ 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:17.437 ************************************ 00:15:17.437 START TEST nvmf_shutdown_tc3 00:15:17.437 ************************************ 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:17.437 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.438 16:28:11 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.438 
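The arrays above whitelist the NIC device IDs the harness knows how to drive: Intel E810 (0x1592/0x159b), X722 (0x37d2), and a range of Mellanox ConnectX parts, with 0x1015 (ConnectX-4 Lx) being the one matched on this node. Outside the harness the same lookup is a one-liner, filtering by Mellanox vendor ID 0x15b3:

# Should list both ports found below, 0000:18:00.0 and 0000:18:00.1
# (MT27710 family, "ConnectX-4 Lx").
lspci -D -d 15b3:1015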
16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:17.438 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:17.438 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.438 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:17.438 Found net devices under 0000:18:00.0: mlx_0_0 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:17.439 Found net devices under 0000:18:00.1: mlx_0_1 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:17.439 16:28:11 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:17.439 16:28:12 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:17.439 16:28:12 
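rdma_device_init boils down to probing the upstream RDMA core stack before any IPs are assigned; by hand, the modprobe sequence traced above is just:

# All standard in-tree kernel modules, loaded in the same order as common.sh.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"
done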
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:17.439 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.439 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:17.439 altname enp24s0f0np0 00:15:17.439 altname ens785f0np0 00:15:17.439 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.439 valid_lft forever preferred_lft forever 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:17.439 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.439 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:17.439 altname enp24s0f1np1 00:15:17.439 altname ens785f1np1 00:15:17.439 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.439 valid_lft forever preferred_lft forever 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 
-- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.439 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr 
show mlx_0_1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.440 192.168.100.9' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:17.440 192.168.100.9' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:17.440 192.168.100.9' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:17.440 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3793850 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3793850 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3793850 ']' 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.699 16:28:12 
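With both ports up, the harness resolves the RDMA-capable interfaces to 192.168.100.8 and 192.168.100.9 and then simply peels the first and second entries off the newline-separated list, exactly as the head/tail calls above show. In isolation:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)            # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9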
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.699 [2024-12-06 16:28:12.234772] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:17.699 [2024-12-06 16:28:12.234814] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.699 [2024-12-06 16:28:12.293701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.699 [2024-12-06 16:28:12.333436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.699 [2024-12-06 16:28:12.333485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.699 [2024-12-06 16:28:12.333491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.699 [2024-12-06 16:28:12.333497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.699 [2024-12-06 16:28:12.333501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.699 [2024-12-06 16:28:12.334888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.699 [2024-12-06 16:28:12.334970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.699 [2024-12-06 16:28:12.335081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.699 [2024-12-06 16:28:12.335082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.699 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 [2024-12-06 16:28:12.486723] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x1b153c0/0x1b198b0) succeed. 00:15:17.958 [2024-12-06 16:28:12.494887] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b16a50/0x1b5af50) succeed. 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:17.958 16:28:12 
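The -m 0x1E mask passed to nvmf_tgt explains the reactor placement above: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1 through 4 put the four reactors on cores 1-4. That leaves core 0 free for the bdevperf client, which is started with -c 0x1 further down:

echo 'obase=2; ibase=16; 1E' | bc    # prints 11110, i.e. cores 1,2,3,4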
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.958 16:28:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 Malloc1 00:15:18.216 [2024-12-06 16:28:12.702677] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.216 Malloc2 00:15:18.216 Malloc3 00:15:18.216 Malloc4 00:15:18.216 Malloc5 00:15:18.216 Malloc6 00:15:18.475 Malloc7 00:15:18.475 Malloc8 00:15:18.475 Malloc9 00:15:18.475 Malloc10 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3794063 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3794063 /var/tmp/bdevperf.sock 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3794063 ']' 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
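Each "cat" in the loop above appends one subsystem's worth of RPC lines to rpcs.txt, which a single rpc_cmd session then replays against the target; that is where the ten Malloc bdevs and the lone RDMA listener notice come from. Per subsystem the batch amounts to something like the following sketch (the exact Malloc size and flags used by shutdown.sh may differ):

bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420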
00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.475 { 00:15:18.475 "params": { 00:15:18.475 "name": "Nvme$subsystem", 00:15:18.475 "trtype": "$TEST_TRANSPORT", 00:15:18.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.475 "adrfam": "ipv4", 00:15:18.475 "trsvcid": "$NVMF_PORT", 00:15:18.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.475 "hdgst": ${hdgst:-false}, 00:15:18.475 "ddgst": ${ddgst:-false} 00:15:18.475 }, 00:15:18.475 "method": "bdev_nvme_attach_controller" 00:15:18.475 } 00:15:18.475 EOF 00:15:18.475 )") 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.475 { 00:15:18.475 "params": { 00:15:18.475 "name": "Nvme$subsystem", 00:15:18.475 "trtype": "$TEST_TRANSPORT", 00:15:18.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.475 "adrfam": "ipv4", 00:15:18.475 "trsvcid": "$NVMF_PORT", 00:15:18.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.475 "hdgst": ${hdgst:-false}, 00:15:18.475 "ddgst": ${ddgst:-false} 00:15:18.475 }, 00:15:18.475 "method": "bdev_nvme_attach_controller" 00:15:18.475 } 00:15:18.475 EOF 00:15:18.475 )") 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.475 { 00:15:18.475 "params": { 00:15:18.475 "name": "Nvme$subsystem", 00:15:18.475 "trtype": "$TEST_TRANSPORT", 00:15:18.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.475 "adrfam": "ipv4", 00:15:18.475 "trsvcid": "$NVMF_PORT", 00:15:18.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.475 "hdgst": ${hdgst:-false}, 00:15:18.475 "ddgst": ${ddgst:-false} 00:15:18.475 }, 00:15:18.475 "method": "bdev_nvme_attach_controller" 00:15:18.475 } 00:15:18.475 EOF 00:15:18.475 )") 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.475 { 00:15:18.475 "params": { 00:15:18.475 "name": "Nvme$subsystem", 00:15:18.475 "trtype": "$TEST_TRANSPORT", 00:15:18.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.475 "adrfam": "ipv4", 00:15:18.475 "trsvcid": "$NVMF_PORT", 00:15:18.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.475 "hdgst": ${hdgst:-false}, 00:15:18.475 "ddgst": ${ddgst:-false} 00:15:18.475 }, 00:15:18.475 "method": "bdev_nvme_attach_controller" 00:15:18.475 } 00:15:18.475 EOF 00:15:18.475 )") 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.475 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.475 { 00:15:18.475 "params": { 00:15:18.475 "name": "Nvme$subsystem", 00:15:18.475 "trtype": "$TEST_TRANSPORT", 00:15:18.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.475 "adrfam": "ipv4", 00:15:18.475 "trsvcid": "$NVMF_PORT", 00:15:18.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.475 "hdgst": ${hdgst:-false}, 00:15:18.475 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.476 { 00:15:18.476 "params": { 00:15:18.476 "name": "Nvme$subsystem", 00:15:18.476 "trtype": "$TEST_TRANSPORT", 00:15:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.476 "adrfam": "ipv4", 00:15:18.476 "trsvcid": "$NVMF_PORT", 00:15:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.476 "hdgst": ${hdgst:-false}, 00:15:18.476 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 [2024-12-06 16:28:13.174112] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
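The repeated config+=("$(cat <<-EOF ... EOF)") blocks above are gen_nvmf_target_json building one JSON fragment per subsystem in a bash array; $subsystem, $TEST_TRANSPORT and the rest expand inside the here-document, and the fragments are later comma-joined and normalized with jq. Stripped to its skeleton:

config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# Comma-join the fragments, as nvmf/common.sh does before piping through jq.
(IFS=,; printf '%s\n' "${config[*]}")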
00:15:18.476 [2024-12-06 16:28:13.174174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794063 ] 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.476 { 00:15:18.476 "params": { 00:15:18.476 "name": "Nvme$subsystem", 00:15:18.476 "trtype": "$TEST_TRANSPORT", 00:15:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.476 "adrfam": "ipv4", 00:15:18.476 "trsvcid": "$NVMF_PORT", 00:15:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.476 "hdgst": ${hdgst:-false}, 00:15:18.476 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.476 { 00:15:18.476 "params": { 00:15:18.476 "name": "Nvme$subsystem", 00:15:18.476 "trtype": "$TEST_TRANSPORT", 00:15:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.476 "adrfam": "ipv4", 00:15:18.476 "trsvcid": "$NVMF_PORT", 00:15:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.476 "hdgst": ${hdgst:-false}, 00:15:18.476 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.476 { 00:15:18.476 "params": { 00:15:18.476 "name": "Nvme$subsystem", 00:15:18.476 "trtype": "$TEST_TRANSPORT", 00:15:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.476 "adrfam": "ipv4", 00:15:18.476 "trsvcid": "$NVMF_PORT", 00:15:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.476 "hdgst": ${hdgst:-false}, 00:15:18.476 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:18.476 { 00:15:18.476 "params": { 00:15:18.476 "name": "Nvme$subsystem", 00:15:18.476 "trtype": "$TEST_TRANSPORT", 00:15:18.476 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:15:18.476 "adrfam": "ipv4", 00:15:18.476 "trsvcid": "$NVMF_PORT", 00:15:18.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.476 "hdgst": ${hdgst:-false}, 00:15:18.476 "ddgst": ${ddgst:-false} 00:15:18.476 }, 00:15:18.476 "method": "bdev_nvme_attach_controller" 00:15:18.476 } 00:15:18.476 EOF 00:15:18.476 )") 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:18.476 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:15:18.735 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:15:18.735 16:28:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme1", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme2", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme3", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme4", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme5", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme6", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme7", 00:15:18.735 
"trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme8", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme9", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 },{ 00:15:18.735 "params": { 00:15:18.735 "name": "Nvme10", 00:15:18.735 "trtype": "rdma", 00:15:18.735 "traddr": "192.168.100.8", 00:15:18.735 "adrfam": "ipv4", 00:15:18.735 "trsvcid": "4420", 00:15:18.735 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:18.735 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:18.735 "hdgst": false, 00:15:18.735 "ddgst": false 00:15:18.735 }, 00:15:18.735 "method": "bdev_nvme_attach_controller" 00:15:18.735 }' 00:15:18.735 [2024-12-06 16:28:13.234640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.735 [2024-12-06 16:28:13.273175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.670 Running I/O for 10 seconds... 
00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.670 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:19.928 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.928 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=19 00:15:19.928 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:15:19.928 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:20.186 16:28:14 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=171 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 171 -ge 100 ']' 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3793850 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3793850 ']' 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3793850 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3793850 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3793850' 00:15:20.186 killing process with pid 3793850 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3793850 00:15:20.186 16:28:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3793850 00:15:20.703 2701.00 IOPS, 168.81 MiB/s [2024-12-06T15:28:15.431Z] 16:28:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:15:21.271 [2024-12-06 16:28:15.880981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.881016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.881027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.881033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
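Editor's note: the gate traced above is shutdown.sh's waitforio loop: it polls bdevperf's RPC socket for Nvme1n1's completed read count (19 on the first pass, 171 on the second) and only lets the test kill the target process once at least 100 reads have gone through. A standalone sketch of the same loop follows, assuming SPDK's scripts/rpc.py is on PATH (rpc_cmd in the trace is essentially a wrapper around it); socket path, bdev name, threshold, retry count, and sleep interval all come from the trace.

waitforio() {
    # Poll up to 10 times, 0.25 s apart, until the bdev has >= 100 completed reads.
    local sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1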
cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.881056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.881063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.881069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.881075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7210 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.883638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.271 [2024-12-06 16:28:15.883680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:15:21.271 [2024-12-06 16:28:15.883729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.883754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.883778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.883799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.883822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.883842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.883865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.883886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.885969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.271 [2024-12-06 16:28:15.886003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:15:21.271 [2024-12-06 16:28:15.886043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.886066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.886090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.271 [2024-12-06 16:28:15.886111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.271 [2024-12-06 16:28:15.886133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.886153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.886185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.886206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.888556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.888588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:15:21.272 [2024-12-06 16:28:15.888626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.888648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.888672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.888692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.888714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.888734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.888757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.888777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.891261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.891295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:15:21.272 [2024-12-06 16:28:15.891337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.891361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.891394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.891416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.891438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.891458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.891480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.891501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.893769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.893800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:21.272 [2024-12-06 16:28:15.893836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.893874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.893897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.893917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.893939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.893959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.893982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.894002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.896424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.896456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:15:21.272 [2024-12-06 16:28:15.896473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.896481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.896490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.896498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.896506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.896513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.896528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.896535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.898364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.898405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:15:21.272 [2024-12-06 16:28:15.898447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.898471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.898493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.898514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.898536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.898555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.898584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.898605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.900497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.900529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:15:21.272 [2024-12-06 16:28:15.900565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.900588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.900611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.900631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.900653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.900673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.900695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.900715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.903034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.903065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:15:21.272 [2024-12-06 16:28:15.903105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.903127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.903150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.903170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.903192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.903213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.903235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.272 [2024-12-06 16:28:15.903255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:1 sqhd:f990 p:0 m:0 dnr:0 00:15:21.272 [2024-12-06 16:28:15.905520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:21.272 [2024-12-06 16:28:15.905535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
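Editor's note: each block above is the same failure signature repeated once per remote controller. After the target is killed, the admin qpair for nqn.2016-06.io.spdk:cnodeN reports CQ transport error -6 (No such device or address), its queued ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION, and the controller is marked failed. To see which controllers the bdevperf process still tracks at this point, SPDK's standard bdev_nvme_get_controllers RPC can be queried on the same socket; the jq projection below is an assumption about the output shape, which varies somewhat across SPDK releases.

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'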
00:15:21.272 [2024-12-06 16:28:15.907935] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:15:21.272 [2024-12-06 16:28:15.910459] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:15:21.272 [2024-12-06 16:28:15.913101] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:15:21.272 [2024-12-06 16:28:15.915507] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:15:21.272 [2024-12-06 16:28:15.917662] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:15:21.273 [2024-12-06 16:28:15.919608] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:15:21.273 [2024-12-06 16:28:15.921634] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:15:21.273 [2024-12-06 16:28:15.923442] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:15:21.273 [2024-12-06 16:28:15.925051] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:15:21.273 [2024-12-06 16:28:15.925128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x183800 00:15:21.273 [2024-12-06 16:28:15.925466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 
dnr:0 00:15:21.273 [2024-12-06 16:28:15.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 
16:28:15.925677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x183c00 00:15:21.273 [2024-12-06 16:28:15.925708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001998fd00 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.273 [2024-12-06 16:28:15.925903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x184200 00:15:21.273 [2024-12-06 16:28:15.925912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.925926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.925936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.925949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001994fb00 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.925958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.925970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001993fa80 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.925979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.925993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001990f900 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926081] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001980f100 len:0x10000 key:0x184200 00:15:21.274 [2024-12-06 16:28:15.926398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bdff80 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bcff00 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bafe00 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b9fd80 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b8fd00 len:0x10000 key:0x181c00 00:15:21.274 [2024-12-06 16:28:15.926546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.926559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x183800 00:15:21.274 [2024-12-06 16:28:15.926567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e526c000 sqhd:7210 p:0 m:0 dnr:0 00:15:21.274 [2024-12-06 16:28:15.944810] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944874] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944886] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944897] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944905] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944914] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944924] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944932] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944941] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:15:21.274 [2024-12-06 16:28:15.944950] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.944958] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:15:21.274 [2024-12-06 16:28:15.945510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:21.274 [2024-12-06 16:28:15.945524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:15:21.274 [2024-12-06 16:28:15.945531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.945844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:15:21.275 [2024-12-06 16:28:15.969151] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:15:21.275 [2024-12-06 16:28:15.969207] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:15:21.275 [2024-12-06 16:28:15.969227] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:15:21.275 [2024-12-06 16:28:15.969344] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:15:21.275 [2024-12-06 16:28:15.969369] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:15:21.275 [2024-12-06 16:28:15.969397] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200 00:15:21.275 [2024-12-06 16:28:15.969515] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:15:21.275 [2024-12-06 16:28:15.969540] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:15:21.275 [2024-12-06 16:28:15.969556] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40 00:15:21.275 [2024-12-06 16:28:15.969643] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:15:21.275 [2024-12-06 16:28:15.969668] nvme_rdma.c:1108:nvme_rdma_connect_established: 
00:15:21.275 [2024-12-06 16:28:15.969151] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969207] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969227] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:15:21.275 [2024-12-06 16:28:15.969344] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969369] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969397] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170e3200
00:15:21.275 [2024-12-06 16:28:15.969515] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969540] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969556] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170d8d40
00:15:21.275 [2024-12-06 16:28:15.969643] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969668] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969697] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cc0c0
00:15:21.275 [2024-12-06 16:28:15.969768] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969778] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969783] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170bf380
00:15:21.275 [2024-12-06 16:28:15.969925] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.969936] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.969942] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017052c40
00:15:21.275 [2024-12-06 16:28:15.970022] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.970032] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.970038] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001707e000
00:15:21.275 [2024-12-06 16:28:15.970127] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.970137] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.970143] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001708d2c0
00:15:21.275 [2024-12-06 16:28:15.970228] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.970238] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.970244] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200017089e00
00:15:21.275 [2024-12-06 16:28:15.970331] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:21.275 [2024-12-06 16:28:15.970341] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:21.275 [2024-12-06 16:28:15.970347] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170cb580
00:15:21.275 task offset: 37888 on job bdev=Nvme1n1 fails
00:15:21.275
00:15:21.275 Latency(us)
00:15:21.275 [2024-12-06T15:28:16.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:21.275 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme1n1 ended in about 1.84 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme1n1 : 1.84 147.77 9.24 34.77 0.00 346911.26 6941.96 1037701.88
00:15:21.275 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme2n1 ended in about 1.84 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme2n1 : 1.84 147.67 9.23 34.75 0.00 344091.60 5801.15 1037701.88
00:15:21.275 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme3n1 ended in about 1.84 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme3n1 : 1.84 156.28 9.77 34.73 0.00 325888.52 11116.85 1031488.09
00:15:21.275 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme4n1 ended in about 1.84 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme4n1 : 1.84 156.20 9.76 34.71 0.00 323415.75 5558.42 1031488.09
00:15:21.275 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme5n1 ended in about 1.84 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme5n1 : 1.84 147.44 9.22 34.69 0.00 336302.55 25826.04 1031488.09
00:15:21.275 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme6n1 ended in about 1.85 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme6n1 : 1.85 155.50 9.72 34.68 0.00 319392.74 29515.47 1025274.31
00:15:21.275 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme7n1 ended in about 1.85 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme7n1 : 1.85 155.96 9.75 34.66 0.00 315890.69 37671.06 1025274.31
00:15:21.275 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme8n1 ended in about 1.85 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme8n1 : 1.85 154.79 9.67 34.64 0.00 315126.49 45244.11 1025274.31
00:15:21.275 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme9n1 ended in about 1.85 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme9n1 : 1.85 138.49 8.66 34.62 0.00 341876.13 49127.73 1031488.09
00:15:21.275 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:21.275 Job: Nvme10n1 ended in about 1.81 seconds with error
00:15:21.275 Verification LBA range: start 0x0 length 0x400
00:15:21.275 Nvme10n1 : 1.81 106.20 6.64 35.40 0.00 416783.17 57477.50 1062557.01
00:15:21.275 [2024-12-06T15:28:16.003Z] ===================================================================================================================
00:15:21.275 [2024-12-06T15:28:16.003Z] Total : 1466.30 91.64 347.64 0.00 336252.98 5558.42 1062557.01
00:15:21.275 [2024-12-06 16:28:15.992812] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3794063
00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3794063
00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:15:21.897 16:28:16
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.897 16:28:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3794063 00:15:22.501 [2024-12-06 16:28:16.973627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.973690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.975286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.975323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.976831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.976864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.978432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.978463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.979879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.979920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.981201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.981233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.982695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.982726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.984346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.984388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:15:22.502 [2024-12-06 16:28:16.985946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.985979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.987242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:22.502 [2024-12-06 16:28:16.987273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:15:22.502 [2024-12-06 16:28:16.987293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.987313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.987334] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.987359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.987406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.987427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.987446] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.987467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.987504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.987523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.987542] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.987563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.987590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.987609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.987627] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.987647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:15:22.502 [2024-12-06 16:28:16.987674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.987693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.987712] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.987733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.988000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.988026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.988052] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.988061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.988070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.988078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.988085] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.988092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.988102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.988110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.988117] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.988124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:15:22.502 [2024-12-06 16:28:16.988135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.988142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.988150] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.988157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
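Each controller above (and cnode10 just below) walks the same dead end: the reset path tries to reinitialize, fails, lands in "in failed state" with no retry budget, and bdev_nvme reports "Resetting controller failed." For setups that want bounded retries instead of fail-fast, bdev_nvme accepts a reconnect policy when the controller is attached. A sketch that was not run here; the option spellings per current rpc.py are an assumption:

  # hypothetical: attach with a bounded reconnect policy (not part of this test)
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma -f IPV4 \
      -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 5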
00:15:22.502 [2024-12-06 16:28:16.988167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:15:22.502 [2024-12-06 16:28:16.988177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:15:22.502 [2024-12-06 16:28:16.988184] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:15:22.502 [2024-12-06 16:28:16.988192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:22.502 rmmod nvme_rdma 00:15:22.502 rmmod nvme_fabrics 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:15:22.502 16:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3793850 ']' 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3793850 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3793850 ']' 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3793850 00:15:22.502 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3793850) - No such process 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3793850 is not found' 00:15:22.502 Process with pid 3793850 is not found 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:22.502 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:22.502 00:15:22.502 real 0m5.242s 00:15:22.503 user 0m15.315s 00:15:22.503 sys 0m1.064s 00:15:22.503 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.503 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:22.503 ************************************ 00:15:22.503 END TEST nvmf_shutdown_tc3 00:15:22.503 ************************************ 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:22.762 ************************************ 00:15:22.762 START TEST nvmf_shutdown_tc4 00:15:22.762 ************************************ 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:22.762 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
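The `NOT wait 3794063` xtrace above is tc3's actual assertion: bdevperf has to die once its controllers are gone, so a zero exit from `wait` would fail the test. Read back from the trace (es=255 from the reaped child, squashed to 127 because it exceeds 128, then folded to 1), the helpers in common/autotest_common.sh look roughly like this; the exact case arms are an assumption:

  # reconstruction from the xtrace, not a verbatim copy of autotest_common.sh
  valid_exec_arg() {
      local arg=$1
      # only something bash can actually execute may be wrapped
      case "$(type -t "$arg")" in
          function | builtin | file) ;;
          *) return 1 ;;
      esac
  }
  NOT() {
      local es=0
      valid_exec_arg "$@" || return 1
      "$@" || es=$?
      (( es > 128 )) && es=127   # exit-by-signal range collapses to 127
      case "$es" in
          127) es=1 ;;           # arm spelling assumed; the trace shows 127 -> 1
      esac
      (( !es == 0 ))             # succeed only if the wrapped command failed
  }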
00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:22.763 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:22.763 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:22.763 16:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:22.763 Found net devices under 0000:18:00.0: mlx_0_0 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:22.763 Found net devices under 0000:18:00.1: mlx_0_1 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:22.763 16:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:22.763 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
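The allocate_nic_ips walk that starts above and continues just below pairs each RDMA-capable netdev (mlx_0_0, then mlx_0_1) with its IPv4 address through get_ip_address. That helper reduces to one pipeline, reconstructed from the xtrace:

  # get_ip_address as traced: first IPv4 address of an interface, prefix length stripped
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9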
00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:22.764 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:22.764 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:22.764 altname enp24s0f0np0 00:15:22.764 altname ens785f0np0 00:15:22.764 inet 192.168.100.8/24 scope global mlx_0_0 00:15:22.764 valid_lft forever preferred_lft forever 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:22.764 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:22.764 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:22.764 altname enp24s0f1np1 00:15:22.764 altname ens785f1np1 00:15:22.764 
inet 192.168.100.9/24 scope global mlx_0_1 00:15:22.764 valid_lft forever preferred_lft forever 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:22.764 16:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:22.764 192.168.100.9' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:22.764 192.168.100.9' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:22.764 192.168.100.9' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:22.764 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.023 16:28:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3794951 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3794951 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3794951 ']' 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.023 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.023 [2024-12-06 16:28:17.564238] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:23.023 [2024-12-06 16:28:17.564285] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.023 [2024-12-06 16:28:17.623851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.023 [2024-12-06 16:28:17.666413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.023 [2024-12-06 16:28:17.666448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.023 [2024-12-06 16:28:17.666455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.023 [2024-12-06 16:28:17.666461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.023 [2024-12-06 16:28:17.666466] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
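nvmfappstart, traced above, reduces to launching the target with the requested reactor mask and blocking until its RPC socket answers. A condensed sketch of the same sequence (workspace path shortened; waitforlisten's polling detail is an assumption):

  # condensed nvmfappstart, per the trace above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs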
00:15:23.023 [2024-12-06 16:28:17.667920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.023 [2024-12-06 16:28:17.667994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.023 [2024-12-06 16:28:17.668104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.023 [2024-12-06 16:28:17.668104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.281 [2024-12-06 16:28:17.826640] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x217e3c0/0x21828b0) succeed. 00:15:23.281 [2024-12-06 16:28:17.834808] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x217fa50/0x21c3f50) succeed. 
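rpc_cmd above is a thin wrapper over scripts/rpc.py against the target's unix socket, so the traced transport creation is equivalent to running it by hand (default socket path assumed):

  # manual equivalent of the rpc_cmd traced above
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192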
00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:23.281 16:28:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:23.281 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:23.281 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.281 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.540 Malloc1 00:15:23.540 [2024-12-06 16:28:18.043770] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:23.540 Malloc2 00:15:23.540 Malloc3 00:15:23.540 Malloc4 00:15:23.540 Malloc5 00:15:23.540 Malloc6 00:15:23.798 Malloc7 00:15:23.798 Malloc8 00:15:23.798 Malloc9 00:15:23.798 Malloc10 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3795123 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:15:23.798 16:28:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:15:24.057 [2024-12-06 16:28:18.546490] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
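That deprecation warning marks the perf job attaching through the discovery service, and tc4's shape is now visible in one place: start spdk_nvme_perf in the background, give it a few seconds of I/O, then kill the target underneath it; the "NVMe io qpair process completion error" lines that follow are the expected outcome, not a bug. Condensed from the trace (workspace path shortened):

  # condensed tc4 flow, per the trace above
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5
  killprocess "$nvmfpid"   # dropping the target mid-run triggers the qpair errors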
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3794951
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3794951 ']'
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3794951
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3794951
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3794951'
00:15:29.338 killing process with pid 3794951
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3794951
00:15:29.338 16:28:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3794951
00:15:29.338 NVMe io qpair process completion error (several such records follow, interleaved with two "starting I/O failed: -6" records)
00:15:29.596 16:28:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
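The autotest_common.sh@954-@978 lines above are the killprocess helper tearing down the nvmf target (pid 3794951) while perf is still writing. A sketch of the logic those traced branches imply, reconstructed from the xtrace alone (the real helper may differ in detail):

killprocess() {
	local pid=$1
	[ -z "$pid" ] && return 1               # @954: a pid argument is required
	kill -0 "$pid" || return 0              # @958: already gone, nothing to do
	if [ "$(uname)" = Linux ]; then         # @959
		process_name=$(ps --no-headers -o comm= "$pid")  # @960: "reactor_1" here
	else
		process_name=unknown                # assumption: non-Linux branch not traced
	fi
	[ "$process_name" = sudo ] && return 1  # @964: refuse to kill a sudo wrapper
	echo "killing process with pid $pid"    # @972
	kill "$pid"                             # @973: plain SIGTERM
	wait "$pid"                             # @978: reap it and collect its status
}

The qpair completion errors just above, and the flood of failed writes that follows, are consistent with the perf initiator losing its target mid-run, which is precisely the condition tc4 sets out to exercise.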
00:15:30.162 Write completed with error (sct=0, sc=8) (this record repeats several hundred times through 00:15:30.165, interleaved with "starting I/O failed: -6" records; the distinct errors logged within the flood follow)
00:15:30.162 [2024-12-06 16:28:24.616426] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed
00:15:30.163 [2024-12-06 16:28:24.627453] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed
00:15:30.163 [2024-12-06 16:28:24.637559] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed
00:15:30.164 [2024-12-06 16:28:24.660643] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed
00:15:30.165 Write completed with error (sct=0, sc=8) (repeated ~25× to close out the flood)
00:15:30.165 [2024-12-06 16:28:24.670492] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:15:30.165 NVMe io qpair process completion error (×4)
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3795123
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3795123
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:30.424 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3795123
00:15:30.997 [2024-12-06 16:28:25.673369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.673437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
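The NOT wait at shutdown.sh@158 is the actual assertion of this test case: with the target gone, the perf process (pid 3795123) must exit with an error, and NOT inverts that status so the test passes only on failure. A sketch of the NOT/valid_exec_arg pattern implied by the @640-@655 trace lines (reconstructed; the real helpers in autotest_common.sh may differ):

valid_exec_arg() {
	local arg=$1
	# @644: only run things the shell can resolve (builtin, function, or file)
	case "$(type -t "$arg")" in
		builtin | function | file) return 0 ;;
		*) return 1 ;;
	esac
}

NOT() {
	local es=0                   # @652
	valid_exec_arg "$@" || return 1
	"$@" || es=$?                # @655: here the command is: wait 3795123
	# Invert the outcome: a zero exit status means the command unexpectedly
	# succeeded, which fails the assertion; any error status passes it.
	((es != 0))
}

The CQ transport error -6 (ENXIO) records that follow show each remaining controller's completion queue dying once the target process is gone, after which the driver marks the controllers failed.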
00:15:30.997 Write completed with error (sct=0, sc=8) (repeated ~12×)
00:15:30.997 [2024-12-06 16:28:25.675893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.675930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:15:30.997 Write completed with error (sct=0, sc=8) (repeated dozens of times)
00:15:30.997 [2024-12-06 16:28:25.677689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.677724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:15:30.997 Write completed with error (sct=0, sc=8) (repeated ~30×)
00:15:30.997 [2024-12-06 16:28:25.680173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.680207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:15:30.997 Write completed with error (sct=0, sc=8) (repeated ~12×)
00:15:30.997 [2024-12-06 16:28:25.682761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.682794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:15:30.997 Write completed with error (sct=0, sc=8)
00:15:30.997 [2024-12-06 16:28:25.685804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.997 [2024-12-06 16:28:25.685836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:15:30.997 Write completed with error (sct=0, sc=8) (repeated ~22×)
00:15:30.998 [2024-12-06 16:28:25.688155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:30.998 [2024-12-06 16:28:25.688198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:15:30.998 Write completed with error (sct=0, sc=8) (repeated ~65×)
(sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 [2024-12-06 16:28:25.690735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:30.998 [2024-12-06 16:28:25.690767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 [2024-12-06 16:28:25.692853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:30.998 [2024-12-06 16:28:25.692886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 [2024-12-06 16:28:25.695740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:30.998 [2024-12-06 16:28:25.695783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error 
(sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.998 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed 
with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write completed with error (sct=0, sc=8) 00:15:30.999 Write 
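For reference, the repeating status pair decodes via the NVMe base specification's Generic Command Status table (sct=0 selects that table); the helper below is a hypothetical illustration, not part of the test suite:

# Hypothetical decoder for the (sct=0, sc=N) pairs in the completions above;
# values taken from the NVMe base spec, Generic Command Status table.
decode_nvme_generic_sc() {
    case "$1" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "Generic status $1 (see NVMe base spec)" ;;
    esac
}
decode_nvme_generic_sc 8    # the status carried by every aborted write above

sc=8 (Command Aborted due to SQ Deletion) is consistent with this shutdown test tearing the qpairs down while writes were still queued.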
00:15:31.257 Initializing NVMe Controllers
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.257 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:15:31.257 Controller IO queue size 128, less than required.
00:15:31.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.258 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:15:31.258 Controller IO queue size 128, less than required.
00:15:31.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:31.258 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:15:31.258 Controller IO queue size 128, less than required.
00:15:31.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
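Every controller reports the same queue-size advisory. If the driver-side queueing is unwanted, the run can be repeated with an I/O depth the controllers accept outright; a minimal sketch (the exact invocation this test used is not shown in the log, and cnode1 here stands in for any of the ten targets):

# -q I/O depth, -o I/O size in bytes, -w workload, -t seconds,
# -r transport ID of the RDMA listener the test already targets.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'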
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:15:31.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:31.258 Initialization complete. Launching workers.
00:15:31.258 ========================================================
00:15:31.258 Latency(us)
00:15:31.258 Device Information : IOPS MiB/s Average min max
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1653.15 71.03 90473.67 104.94 2201236.80
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1635.68 70.28 77397.11 107.77 1184717.07
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1634.67 70.24 77523.42 103.85 1194567.90
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1663.39 71.47 89948.51 108.19 2191227.71
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1696.14 72.88 88317.26 91.77 2058918.15
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1624.09 69.79 78076.71 107.98 1214737.78
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1661.71 71.40 90183.42 108.19 2171980.59
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1628.46 69.97 77709.86 106.00 1190364.46
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1621.91 69.69 78347.60 108.08 1213962.57
00:15:31.258 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1609.32 69.15 79045.87 107.72 1237379.07
00:15:31.258 ========================================================
00:15:31.258 Total : 16428.52 705.91 82770.63 91.77 2201236.80
00:15:31.258
00:15:31.258 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
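The Total row in the table above is just the column-wise sum of the ten device rows; with the console output saved to a file (perf.log is a hypothetical name), the IOPS and MiB/s totals can be re-derived with awk, since the last five fields of each device row are IOPS, MiB/s, Average, min, max:

awk '/NSID 1 from core/ { iops += $(NF-4); mibs += $(NF-3) }
     END { printf "Total: %.2f IOPS, %.2f MiB/s\n", iops, mibs }' perf.log

Summing the rows as printed gives 16428.52 IOPS, matching the Total line.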
00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3794951 ']'
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3794951
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3794951 ']'
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3794951
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3794951) - No such process
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3794951 is not found'
Process with pid 3794951 is not found
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:15:31.258
00:15:31.258 real 0m8.529s
00:15:31.258 user 0m31.917s
00:15:31.258 sys 0m1.114s
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:15:31.258 ************************************
00:15:31.258 END TEST nvmf_shutdown_tc4
************************************ 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:15:31.258 00:15:31.258 real 0m30.491s 00:15:31.258 user 1m33.594s 00:15:31.258 sys 0m8.561s 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:31.258 ************************************ 00:15:31.258 END TEST nvmf_shutdown 00:15:31.258 ************************************ 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.258 ************************************ 00:15:31.258 START TEST nvmf_nsid 00:15:31.258 ************************************ 00:15:31.258 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:15:31.516 * Looking for test storage... 00:15:31.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:31.516 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:31.516 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:15:31.516 16:28:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.516 --rc genhtml_branch_coverage=1 00:15:31.516 --rc genhtml_function_coverage=1 00:15:31.516 --rc genhtml_legend=1 00:15:31.516 --rc geninfo_all_blocks=1 00:15:31.516 --rc geninfo_unexecuted_blocks=1 00:15:31.516 00:15:31.516 ' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.516 --rc genhtml_branch_coverage=1 00:15:31.516 --rc genhtml_function_coverage=1 00:15:31.516 --rc genhtml_legend=1 00:15:31.516 --rc geninfo_all_blocks=1 00:15:31.516 --rc geninfo_unexecuted_blocks=1 00:15:31.516 00:15:31.516 ' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.516 --rc genhtml_branch_coverage=1 00:15:31.516 --rc genhtml_function_coverage=1 00:15:31.516 --rc genhtml_legend=1 00:15:31.516 --rc geninfo_all_blocks=1 00:15:31.516 --rc geninfo_unexecuted_blocks=1 00:15:31.516 00:15:31.516 ' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.516 --rc genhtml_branch_coverage=1 00:15:31.516 --rc genhtml_function_coverage=1 00:15:31.516 --rc genhtml_legend=1 00:15:31.516 --rc geninfo_all_blocks=1 00:15:31.516 --rc geninfo_unexecuted_blocks=1 00:15:31.516 00:15:31.516 ' 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:31.516 16:28:26 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.516 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.517 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:31.517 16:28:26 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.778 16:28:31 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.778 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:36.779 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:36.779 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
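The discovery loop above matches the two Mellanox ports by PCI ID (vendor 0x15b3, device 0x1015, i.e. ConnectX-4 Lx). Outside the harness, the same inventory can be taken by hand; a small sketch using standard lspci filtering (-d selects by vendor:device, -k shows the bound kernel driver):

lspci -d 15b3: -k    # list Mellanox PCI functions and their driver (mlx5_core here)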
00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:36.779 Found net devices under 0000:18:00.0: mlx_0_0 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:36.779 Found net devices under 0000:18:00.1: mlx_0_1 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:36.779 16:28:31 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:36.779 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.038 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:37.038 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:37.038 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.038 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.038 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
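Before any addresses are assigned, rdma_device_init loads the InfiniBand/RDMA core stack in the order traced at common.sh@66-72. Condensed to its essentials, assuming a kernel that ships these modules:

    # load_ib_rdma_modules, reduced; the harness does not abort on failure here
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || echo "failed to load $mod" >&2
    done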
00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:37.039 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.039 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:37.039 altname enp24s0f0np0 00:15:37.039 altname ens785f0np0 00:15:37.039 inet 192.168.100.8/24 scope global mlx_0_0 00:15:37.039 valid_lft forever preferred_lft forever 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:37.039 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.039 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:37.039 altname enp24s0f1np1 00:15:37.039 altname ens785f1np1 00:15:37.039 inet 192.168.100.9/24 scope global mlx_0_1 00:15:37.039 valid_lft forever preferred_lft forever 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.039 
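get_ip_address (common.sh@116-117) isolates the IPv4 address by taking field 4 of ip -o -4 addr show, which is the CIDR form such as 192.168.100.8/24, and cutting at the slash. As a standalone helper:

    get_ip_address() {
        local interface=$1
        # field 4 of the one-line output is e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this machine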
16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:37.039 192.168.100.9' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:37.039 192.168.100.9' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:37.039 192.168.100.9' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:37.039 16:28:31 
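RDMA_IP_LIST is a newline-separated list, so the first and second target IPs fall out of head and tail exactly as traced at common.sh@485-486. Equivalent shell:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'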
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3799704 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3799704 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3799704 ']' 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.039 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.039 [2024-12-06 16:28:31.689245] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:37.039 [2024-12-06 16:28:31.689293] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.039 [2024-12-06 16:28:31.746193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.298 [2024-12-06 16:28:31.785229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.298 [2024-12-06 16:28:31.785260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.298 [2024-12-06 16:28:31.785267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.298 [2024-12-06 16:28:31.785272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.298 [2024-12-06 16:28:31.785277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
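nvmfappstart launches build/bin/nvmf_tgt in the background and waitforlisten blocks until the UNIX-domain RPC socket answers. A sketch of that polling loop; the real waitforlisten in autotest_common.sh has more retries and error handling, and rpc_get_methods is used here on the assumption that any answered RPC proves readiness:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # a successful RPC means the app is up and listening on the socket
        if /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done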
00:15:37.298 [2024-12-06 16:28:31.785740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3799729 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=fb973426-c882-4ccc-aa35-f5271248e8d0 00:15:37.298 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:37.299 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4cd61aad-17db-4435-b244-d16377c2cf51 00:15:37.299 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:37.299 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8c1097d8-a17f-488f-a9f9-0adc12de5cd7 00:15:37.299 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:37.299 16:28:31 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.299 16:28:31 
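The three uuidgen calls at nsid.sh@56-58 seed the namespace identities the test later verifies over the wire. uuid2nguid (traced as tr -d - at common.sh@787) reduces a UUID to NGUID form; the upper-casing assumed here matches the FB97... comparisons later in the run:

    ns1uuid=$(uuidgen)      # e.g. fb973426-c882-4ccc-aa35-f5271248e8d0
    uuid2nguid() {
        local u=${1//-/}    # drop the dashes
        echo "${u^^}"       # upper-case: FB973426C8824CCCAA35F5271248E8D0
    }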
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.299 null0 00:15:37.299 null1 00:15:37.299 [2024-12-06 16:28:31.960367] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:37.299 [2024-12-06 16:28:31.960420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799729 ] 00:15:37.299 null2 00:15:37.299 [2024-12-06 16:28:31.987537] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc2f8d0/0xc400b0) succeed. 00:15:37.299 [2024-12-06 16:28:31.996482] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc30d80/0xcc0140) succeed. 00:15:37.299 [2024-12-06 16:28:32.018835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.557 [2024-12-06 16:28:32.044538] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:37.557 [2024-12-06 16:28:32.057895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3799729 /var/tmp/tgt2.sock 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3799729 ']' 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:15:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:37.557 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:38.122 [2024-12-06 16:28:32.593423] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16e88b0/0x16638c0) succeed. 00:15:38.122 [2024-12-06 16:28:32.602301] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1837d90/0x16a4f60) succeed. 
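The rpc_cmd block at nsid.sh@63 is a heredoc, so the individual RPCs never appear in the trace; only their effects do (null0/null1/null2 bdevs, RDMA listeners on 4420 and 4421). A plausible reconstruction for one namespace using standard rpc.py verbs, not the literal script, with the bdev sizes assumed:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/tgt2.sock
    $rpc -s $sock nvmf_create_transport -t rdma
    $rpc -s $sock bdev_null_create null0 100 4096   # 100 MiB, 4K blocks (assumed)
    $rpc -s $sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    $rpc -s $sock nvmf_subsystem_add_ns -u "$ns1uuid" nqn.2024-10.io.spdk:cnode2 null0
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 \
        -t rdma -a 192.168.100.8 -s 4421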
00:15:38.122 [2024-12-06 16:28:32.643381] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:38.122 nvme0n1 nvme0n2 00:15:38.122 nvme1n1 00:15:38.122 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:38.122 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:38.122 16:28:32 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid fb973426-c882-4ccc-aa35-f5271248e8d0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fb973426c8824cccaa35f5271248e8d0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FB973426C8824CCCAA35F5271248E8D0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ FB973426C8824CCCAA35F5271248E8D0 == \F\B\9\7\3\4\2\6\C\8\8\2\4\C\C\C\A\A\3\5\F\5\2\7\1\2\4\8\E\8\D\0 ]] 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:46.229 16:28:39 
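nvme_connect (nsid.sh@25-32) attaches the host to the second target and scans /sys/class/nvme for the controller whose subsysnqn matches; waitforblk then polls lsblk until each namespace block device appears, after which its NGUID is checked against the UUID assigned at create time. The same flow condensed, reusing the uuid2nguid helper sketched above; the retry limit in waitforblk is an assumption:

    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --hostid=00bafac1-9c9c-e711-906e-0017a4403562
    for ctrlr in /sys/class/nvme/nvme*; do
        # match the subsystem NQN to find the controller this connect created
        [[ -e $ctrlr/subsysnqn && $(<"$ctrlr/subsysnqn") == nqn.2024-10.io.spdk:cnode2 ]] \
            && { echo "${ctrlr##*/}"; break; }   # nvme0 in this run
    done
    waitforblk() {
        local i=0
        until lsblk -l -o NAME | grep -q -w "$1"; do
            (( ++i > 200 )) && return 1   # give up after ~20s
            sleep 0.1
        done
    }
    waitforblk nvme0n1
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${nguid^^} == "$(uuid2nguid "$ns1uuid")" ]] && echo 'nsid 1: NGUID matches'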
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4cd61aad-17db-4435-b244-d16377c2cf51 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4cd61aad17db4435b244d16377c2cf51 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4CD61AAD17DB4435B244D16377C2CF51 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4CD61AAD17DB4435B244D16377C2CF51 == \4\C\D\6\1\A\A\D\1\7\D\B\4\4\3\5\B\2\4\4\D\1\6\3\7\7\C\2\C\F\5\1 ]] 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8c1097d8-a17f-488f-a9f9-0adc12de5cd7 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8c1097d8a17f488fa9f90adc12de5cd7 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8C1097D8A17F488FA9F90ADC12DE5CD7 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8C1097D8A17F488FA9F90ADC12DE5CD7 == 
\8\C\1\0\9\7\D\8\A\1\7\F\4\8\8\F\A\9\F\9\0\A\D\C\1\2\D\E\5\C\D\7 ]] 00:15:46.229 16:28:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3799729 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3799729 ']' 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3799729 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3799729 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3799729' 00:15:52.782 killing process with pid 3799729 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3799729 00:15:52.782 16:28:46 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3799729 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:52.782 rmmod nvme_rdma 00:15:52.782 rmmod nvme_fabrics 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3799704 ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3799704 ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
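Teardown starts with nvme disconnect -d /dev/nvme0 and then killprocess for each target pid. Condensed from the trace at autotest_common.sh@954-978; the sudo check prevents killing a privilege wrapper instead of the reactor process:

    killprocess() {
        local pid=$1
        local name
        name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        [[ $name == sudo ]] && return 1                       # refuse to kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true
    }
    nvme disconnect -d /dev/nvme0
    killprocess "$tgt2pid"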
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3799704' 00:15:52.782 killing process with pid 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3799704 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:52.782 00:15:52.782 real 0m21.517s 00:15:52.782 user 0m32.395s 00:15:52.782 sys 0m5.209s 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:52.782 ************************************ 00:15:52.782 END TEST nvmf_nsid 00:15:52.782 ************************************ 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:52.782 00:15:52.782 real 7m10.016s 00:15:52.782 user 17m21.830s 00:15:52.782 sys 1m52.373s 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.782 16:28:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.782 ************************************ 00:15:52.782 END TEST nvmf_target_extra 00:15:52.782 ************************************ 00:15:52.782 16:28:47 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:15:52.782 16:28:47 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:52.782 16:28:47 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.782 16:28:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:53.041 ************************************ 00:15:53.041 START TEST nvmf_host 00:15:53.041 ************************************ 00:15:53.041 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:15:53.042 * Looking for test storage... 
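nvmftestfini finishes by unloading the fabrics modules; the rmmod nvme_rdma and rmmod nvme_fabrics lines in the log are modprobe -v output. The retry loop mirrors common.sh@124-128, where unloading can fail briefly while references linger after disconnect:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 0.5
    done
    set -e
    modprobe -v -r nvme-fabrics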
00:15:53.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.042 --rc genhtml_branch_coverage=1 00:15:53.042 --rc genhtml_function_coverage=1 00:15:53.042 --rc genhtml_legend=1 00:15:53.042 --rc geninfo_all_blocks=1 00:15:53.042 --rc geninfo_unexecuted_blocks=1 00:15:53.042 00:15:53.042 ' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:15:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.042 --rc genhtml_branch_coverage=1 00:15:53.042 --rc genhtml_function_coverage=1 00:15:53.042 --rc genhtml_legend=1 00:15:53.042 --rc geninfo_all_blocks=1 00:15:53.042 --rc geninfo_unexecuted_blocks=1 00:15:53.042 00:15:53.042 ' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.042 --rc genhtml_branch_coverage=1 00:15:53.042 --rc genhtml_function_coverage=1 00:15:53.042 --rc genhtml_legend=1 00:15:53.042 --rc geninfo_all_blocks=1 00:15:53.042 --rc geninfo_unexecuted_blocks=1 00:15:53.042 00:15:53.042 ' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.042 --rc genhtml_branch_coverage=1 00:15:53.042 --rc genhtml_function_coverage=1 00:15:53.042 --rc genhtml_legend=1 00:15:53.042 --rc geninfo_all_blocks=1 00:15:53.042 --rc geninfo_unexecuted_blocks=1 00:15:53.042 00:15:53.042 ' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.042 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
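The repeated "line 33: [: : integer expression expected" above comes from an integer test on an unset variable inside build_nvmf_app_args: [ '' -eq 1 ] is a bash error, which the script tolerates only because the test then simply fails. A defensive pattern; SOME_TEST_FLAG is a stand-in name, since the real variable tested at common.sh line 33 is not visible in this trace:

    # default the flag to 0 so the integer comparison can never see an empty string
    if [[ ${SOME_TEST_FLAG:-0} -eq 1 ]]; then
        NVMF_APP+=(--hypothetical-extra-arg)   # placeholder argument
    fi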
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.042 ************************************ 00:15:53.042 START TEST nvmf_multicontroller 00:15:53.042 ************************************ 00:15:53.042 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:15:53.301 * Looking for test storage... 00:15:53.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.301 --rc genhtml_branch_coverage=1 00:15:53.301 --rc genhtml_function_coverage=1 00:15:53.301 --rc genhtml_legend=1 00:15:53.301 --rc geninfo_all_blocks=1 00:15:53.301 --rc geninfo_unexecuted_blocks=1 00:15:53.301 00:15:53.301 ' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.301 --rc genhtml_branch_coverage=1 00:15:53.301 --rc genhtml_function_coverage=1 00:15:53.301 --rc genhtml_legend=1 00:15:53.301 --rc geninfo_all_blocks=1 00:15:53.301 --rc geninfo_unexecuted_blocks=1 00:15:53.301 00:15:53.301 ' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.301 --rc genhtml_branch_coverage=1 00:15:53.301 --rc genhtml_function_coverage=1 00:15:53.301 --rc genhtml_legend=1 00:15:53.301 --rc geninfo_all_blocks=1 00:15:53.301 --rc geninfo_unexecuted_blocks=1 00:15:53.301 00:15:53.301 ' 00:15:53.301 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.301 --rc genhtml_branch_coverage=1 00:15:53.301 --rc genhtml_function_coverage=1 00:15:53.301 --rc genhtml_legend=1 00:15:53.301 --rc geninfo_all_blocks=1 00:15:53.301 --rc geninfo_unexecuted_blocks=1 00:15:53.302 00:15:53.302 ' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.302 16:28:47 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:15:53.302 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:15:53.302 00:15:53.302 real 0m0.180s 00:15:53.302 user 0m0.108s 00:15:53.302 sys 0m0.085s 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 ************************************ 00:15:53.302 END TEST nvmf_multicontroller 00:15:53.302 ************************************ 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.302 16:28:47 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 ************************************ 00:15:53.302 START TEST nvmf_aer 00:15:53.302 ************************************ 00:15:53.302 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:15:53.562 * Looking for test storage... 
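[Editor note] Two details in the multicontroller block above are worth calling out: the test self-skips on RDMA (host/multicontroller.sh@18-20), and nvmf/common.sh@33 prints "[: : integer expression expected" because an empty string reaches a numeric test. A minimal sketch of both patterns follows; TEST_TRANSPORT, SOME_FLAG and --some-arg are illustrative stand-ins, since the trace only shows the already-expanded values and does not reveal which variable is empty at common.sh line 33.

    # Transport guard, as traced at host/multicontroller.sh@18-20 (both sides
    # expand to "rdma" in the log above):
    if [ "$TEST_TRANSPORT" == rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi

    # The common.sh@33 warning is [ '' -eq 1 ] on an unset variable. Defaulting
    # the expansion keeps the numeric test well-formed (SOME_FLAG is a
    # hypothetical name, not the script's actual variable):
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-arg)   # hypothetical argument
    fi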
00:15:53.562 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:53.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.562 --rc genhtml_branch_coverage=1 00:15:53.562 --rc genhtml_function_coverage=1 00:15:53.562 --rc genhtml_legend=1 00:15:53.562 --rc geninfo_all_blocks=1 00:15:53.562 --rc geninfo_unexecuted_blocks=1 00:15:53.562 00:15:53.562 ' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:53.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.562 --rc genhtml_branch_coverage=1 00:15:53.562 --rc genhtml_function_coverage=1 00:15:53.562 --rc genhtml_legend=1 00:15:53.562 --rc geninfo_all_blocks=1 00:15:53.562 --rc geninfo_unexecuted_blocks=1 00:15:53.562 00:15:53.562 ' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:53.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.562 --rc genhtml_branch_coverage=1 00:15:53.562 --rc genhtml_function_coverage=1 00:15:53.562 --rc genhtml_legend=1 00:15:53.562 --rc geninfo_all_blocks=1 00:15:53.562 --rc geninfo_unexecuted_blocks=1 00:15:53.562 00:15:53.562 ' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:53.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.562 --rc genhtml_branch_coverage=1 00:15:53.562 --rc genhtml_function_coverage=1 00:15:53.562 --rc genhtml_legend=1 00:15:53.562 --rc geninfo_all_blocks=1 00:15:53.562 --rc geninfo_unexecuted_blocks=1 00:15:53.562 00:15:53.562 ' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.562 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.563 16:28:48 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:58.827 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:58.828 16:28:53 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:58.828 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:58.828 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:58.828 Found net devices under 0000:18:00.0: mlx_0_0 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:58.828 
16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:58.828 Found net devices under 0000:18:00.1: mlx_0_1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.828 16:28:53 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:58.828 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.828 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:58.828 altname enp24s0f0np0 00:15:58.828 altname ens785f0np0 00:15:58.828 inet 192.168.100.8/24 scope global mlx_0_0 00:15:58.828 valid_lft forever preferred_lft forever 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:58.828 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:58.828 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.828 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:58.829 altname enp24s0f1np1 00:15:58.829 altname ens785f1np1 00:15:58.829 inet 192.168.100.9/24 scope global mlx_0_1 00:15:58.829 valid_lft forever preferred_lft forever 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:58.829 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:59.086 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:59.087 192.168.100.9' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:59.087 192.168.100.9' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:59.087 192.168.100.9' 
00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3806037 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3806037 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3806037 ']' 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.087 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.087 [2024-12-06 16:28:53.687507] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:15:59.087 [2024-12-06 16:28:53.687558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.087 [2024-12-06 16:28:53.746654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:59.087 [2024-12-06 16:28:53.788469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.087 [2024-12-06 16:28:53.788504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.087 [2024-12-06 16:28:53.788511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.087 [2024-12-06 16:28:53.788516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:59.087 [2024-12-06 16:28:53.788521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.087 [2024-12-06 16:28:53.789922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.087 [2024-12-06 16:28:53.790014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.087 [2024-12-06 16:28:53.790077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:59.087 [2024-12-06 16:28:53.790079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.345 16:28:53 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 [2024-12-06 16:28:53.951605] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfbd0c0/0xfc15b0) succeed. 00:15:59.345 [2024-12-06 16:28:53.959762] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfbe750/0x1002c50) succeed. 
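[Editor note] The trace above covers the full target bring-up for the aer test: the mlx interface IPs are read back (nvmf/common.sh@117, yielding 192.168.100.8/.9), nvmf_tgt is launched with shm id 0, tracepoint mask 0xFFFF and core mask 0xF (hence the four "Reactor started" notices on cores 0-3), and the RDMA transport is created over RPC once the socket is up. Condensed, with the workspace path shortened:

    # IP discovery as traced at nvmf/common.sh@117:
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)

    # Target start and transport creation (nvmf/common.sh@508-510, host/aer.sh@14):
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # framework helper; blocks until /var/tmp/spdk.sock accepts RPCs
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192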
00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.602 Malloc0 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.602 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 [2024-12-06 16:28:54.132137] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.603 [ 00:15:59.603 { 00:15:59.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.603 "subtype": "Discovery", 00:15:59.603 "listen_addresses": [], 00:15:59.603 "allow_any_host": true, 00:15:59.603 "hosts": [] 00:15:59.603 }, 00:15:59.603 { 00:15:59.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.603 "subtype": "NVMe", 00:15:59.603 "listen_addresses": [ 00:15:59.603 { 00:15:59.603 "trtype": "RDMA", 00:15:59.603 "adrfam": "IPv4", 00:15:59.603 "traddr": "192.168.100.8", 00:15:59.603 "trsvcid": "4420" 00:15:59.603 } 00:15:59.603 ], 00:15:59.603 "allow_any_host": true, 00:15:59.603 "hosts": [], 00:15:59.603 "serial_number": "SPDK00000000000001", 00:15:59.603 "model_number": "SPDK bdev Controller", 00:15:59.603 "max_namespaces": 2, 00:15:59.603 "min_cntlid": 1, 00:15:59.603 "max_cntlid": 65519, 00:15:59.603 "namespaces": [ 00:15:59.603 { 00:15:59.603 "nsid": 1, 00:15:59.603 "bdev_name": "Malloc0", 00:15:59.603 "name": "Malloc0", 00:15:59.603 "nguid": "AB99E128CD774096A7E7A4FC9E8A044E", 00:15:59.603 "uuid": "ab99e128-cd77-4096-a7e7-a4fc9e8a044e" 00:15:59.603 } 00:15:59.603 ] 00:15:59.603 } 00:15:59.603 ] 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3806067 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:15:59.603 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.862 Malloc1 00:15:59.862 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.863 [ 00:15:59.863 { 00:15:59.863 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:59.863 "subtype": "Discovery", 00:15:59.863 "listen_addresses": [], 00:15:59.863 "allow_any_host": true, 00:15:59.863 "hosts": [] 00:15:59.863 }, 00:15:59.863 { 00:15:59.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.863 "subtype": "NVMe", 00:15:59.863 "listen_addresses": [ 00:15:59.863 { 00:15:59.863 "trtype": "RDMA", 00:15:59.863 "adrfam": "IPv4", 00:15:59.863 "traddr": "192.168.100.8", 00:15:59.863 "trsvcid": "4420" 00:15:59.863 } 00:15:59.863 ], 00:15:59.863 "allow_any_host": true, 00:15:59.863 "hosts": [], 00:15:59.863 "serial_number": "SPDK00000000000001", 00:15:59.863 "model_number": "SPDK bdev Controller", 00:15:59.863 "max_namespaces": 2, 00:15:59.863 "min_cntlid": 1, 00:15:59.863 "max_cntlid": 65519, 00:15:59.863 "namespaces": [ 00:15:59.863 { 00:15:59.863 "nsid": 1, 00:15:59.863 "bdev_name": "Malloc0", 00:15:59.863 "name": "Malloc0", 00:15:59.863 "nguid": "AB99E128CD774096A7E7A4FC9E8A044E", 00:15:59.863 "uuid": "ab99e128-cd77-4096-a7e7-a4fc9e8a044e" 00:15:59.863 }, 00:15:59.863 { 00:15:59.863 "nsid": 2, 00:15:59.863 "bdev_name": "Malloc1", 00:15:59.863 "name": "Malloc1", 00:15:59.863 "nguid": "AB0A7812A7EB4DECA0FAB584B50DD5B8", 00:15:59.863 "uuid": "ab0a7812-a7eb-4dec-a0fa-b584b50dd5b8" 00:15:59.863 } 00:15:59.863 ] 00:15:59.863 } 00:15:59.863 ] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3806067 00:15:59.863 Asynchronous Event Request test 00:15:59.863 Attaching to 192.168.100.8 00:15:59.863 Attached to 192.168.100.8 00:15:59.863 Registering asynchronous event callbacks... 00:15:59.863 Starting namespace attribute notice tests for all controllers... 00:15:59.863 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:59.863 aer_cb - Changed Namespace 00:15:59.863 Cleaning up... 
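[Editor note] The AER flow traced above, condensed: a 64 MiB malloc bdev becomes namespace 1 of cnode1, the aer tool connects over RDMA and arms its callback (touching the -t file, which the waitforfile loop polls every 0.1 s, up to 200 tries), and hot-adding Malloc1 as a second namespace fires the "Changed Namespace" notice the tool waits for (log page 4, aen_event_type 0x02). The commands are taken directly from host/aer.sh@16-40 in the trace:

    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # aer runs in the background; it touches the -t file once callbacks are armed:
    ./test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!

    # Adding a second namespace triggers the AEN the tool is waiting on:
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2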
00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:59.863 rmmod nvme_rdma 00:15:59.863 rmmod nvme_fabrics 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3806037 ']' 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3806037 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3806037 ']' 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3806037 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.863 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3806037 00:16:00.120 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:00.120 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:00.120 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3806037' 00:16:00.120 killing process 
with pid 3806037 00:16:00.120 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3806037 00:16:00.120 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3806037 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:00.377 00:16:00.377 real 0m6.846s 00:16:00.377 user 0m5.629s 00:16:00.377 sys 0m4.603s 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:00.377 ************************************ 00:16:00.377 END TEST nvmf_aer 00:16:00.377 ************************************ 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:00.377 ************************************ 00:16:00.377 START TEST nvmf_async_init 00:16:00.377 ************************************ 00:16:00.377 16:28:54 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:00.377 * Looking for test storage... 00:16:00.377 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:16:00.377 
16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.377 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:00.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.377 --rc genhtml_branch_coverage=1 00:16:00.378 --rc genhtml_function_coverage=1 00:16:00.378 --rc genhtml_legend=1 00:16:00.378 --rc geninfo_all_blocks=1 00:16:00.378 --rc geninfo_unexecuted_blocks=1 00:16:00.378 00:16:00.378 ' 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:00.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.378 --rc genhtml_branch_coverage=1 00:16:00.378 --rc genhtml_function_coverage=1 00:16:00.378 --rc genhtml_legend=1 00:16:00.378 --rc geninfo_all_blocks=1 00:16:00.378 --rc geninfo_unexecuted_blocks=1 00:16:00.378 00:16:00.378 ' 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:00.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.378 --rc genhtml_branch_coverage=1 00:16:00.378 --rc genhtml_function_coverage=1 00:16:00.378 --rc genhtml_legend=1 00:16:00.378 --rc geninfo_all_blocks=1 00:16:00.378 --rc geninfo_unexecuted_blocks=1 00:16:00.378 00:16:00.378 ' 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:00.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.378 --rc genhtml_branch_coverage=1 00:16:00.378 --rc genhtml_function_coverage=1 00:16:00.378 --rc genhtml_legend=1 00:16:00.378 --rc geninfo_all_blocks=1 00:16:00.378 --rc geninfo_unexecuted_blocks=1 00:16:00.378 00:16:00.378 ' 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
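[Editor note] The repeated lt/cmp_versions trace (scripts/common.sh@333-368, here gating which lcov option names to use on `lt 1.15 2`) splits each version string on ".", "-" and ":" and compares component-wise. A simplified sketch of the same comparison; the real helper additionally validates each component through its decimal() wrapper, which this sketch assumes away:

    # Returns 0 when $1 < $2, comparing numeric components left to right
    # (simplified; assumes all components are numeric):
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"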
00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:00.378 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.635 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.635 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.635 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.635 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.635 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.636 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
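
Annotation: host/async_init.sh has now fixed its fixture parameters — a null bdev of 1024 blocks x 512 bytes, exposed through controller nvme0. The namespace NGUID that appears in every bdev dump below is derived in the @20 trace that follows by stripping dashes from a random UUID:

    # Same two commands as the host/async_init.sh@20 trace; the value is random per run.
    nguid=$(uuidgen | tr -d -)
    echo "$nguid"    # this run produced 6ea6e099094143ecbd2e266930897087
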
00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6ea6e099094143ecbd2e266930897087 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.636 16:28:55 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:07.193 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:07.193 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:07.193 Found net devices under 0000:18:00.0: mlx_0_0 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:07.193 Found net devices under 0000:18:00.1: mlx_0_1 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:07.193 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:07.194 16:29:00 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:07.194 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:07.194 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:07.194 altname enp24s0f0np0 00:16:07.194 altname ens785f0np0 00:16:07.194 inet 192.168.100.8/24 scope global mlx_0_0 00:16:07.194 valid_lft forever preferred_lft forever 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:07.194 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:07.194 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:07.194 altname enp24s0f1np1 00:16:07.194 altname ens785f1np1 00:16:07.194 inet 192.168.100.9/24 scope global mlx_0_1 00:16:07.194 valid_lft forever preferred_lft forever 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
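
Annotation: allocate_nic_ips has now read back 192.168.100.8/24 and 192.168.100.9/24 from the two mlx_0_* ports. The extraction it leans on is the small pipeline traced at nvmf/common.sh@117 — field 4 of `ip -o -4 addr show` is ADDR/PREFIX:

    # Equivalent of the get_ip_address trace above (interface name from this test bed).
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
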
00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:07.194 192.168.100.9' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:07.194 192.168.100.9' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:07.194 192.168.100.9' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3809578 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3809578 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3809578 ']' 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.194 16:29:00 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.194 [2024-12-06 16:29:00.926865] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:16:07.194 [2024-12-06 16:29:00.926910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.194 [2024-12-06 16:29:00.983493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.195 [2024-12-06 16:29:01.022031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.195 [2024-12-06 16:29:01.022063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.195 [2024-12-06 16:29:01.022070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.195 [2024-12-06 16:29:01.022075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.195 [2024-12-06 16:29:01.022082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
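
Annotation: nvmf_tgt is up (pid 3809578, single core via -m 0x1) and about to answer on /var/tmp/spdk.sock. The rpc_cmd traces that follow drive the whole async_init fixture; collected in order, with arguments verbatim from those traces (rpc_cmd is effectively the harness wrapper around scripts/rpc.py):

    # Condensed from the host/async_init.sh@26..@37 traces below.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc.py bdev_null_create null0 1024 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6ea6e099094143ecbd2e266930897087
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
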
00:16:07.195 [2024-12-06 16:29:01.022553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [2024-12-06 16:29:01.171078] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x140edc0/0x14132b0) succeed. 00:16:07.195 [2024-12-06 16:29:01.178760] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1410270/0x1454950) succeed. 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 null0 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6ea6e099094143ecbd2e266930897087 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [2024-12-06 16:29:01.247210] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 nvme0n1 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [ 00:16:07.195 { 00:16:07.195 "name": "nvme0n1", 00:16:07.195 "aliases": [ 00:16:07.195 "6ea6e099-0941-43ec-bd2e-266930897087" 00:16:07.195 ], 00:16:07.195 "product_name": "NVMe disk", 00:16:07.195 "block_size": 512, 00:16:07.195 "num_blocks": 2097152, 00:16:07.195 "uuid": "6ea6e099-0941-43ec-bd2e-266930897087", 00:16:07.195 "numa_id": 0, 00:16:07.195 "assigned_rate_limits": { 00:16:07.195 "rw_ios_per_sec": 0, 00:16:07.195 "rw_mbytes_per_sec": 0, 00:16:07.195 "r_mbytes_per_sec": 0, 00:16:07.195 "w_mbytes_per_sec": 0 00:16:07.195 }, 00:16:07.195 "claimed": false, 00:16:07.195 "zoned": false, 00:16:07.195 "supported_io_types": { 00:16:07.195 "read": true, 00:16:07.195 "write": true, 00:16:07.195 "unmap": false, 00:16:07.195 "flush": true, 00:16:07.195 "reset": true, 00:16:07.195 "nvme_admin": true, 00:16:07.195 "nvme_io": true, 00:16:07.195 "nvme_io_md": false, 00:16:07.195 "write_zeroes": true, 00:16:07.195 "zcopy": false, 00:16:07.195 "get_zone_info": false, 00:16:07.195 "zone_management": false, 00:16:07.195 "zone_append": false, 00:16:07.195 "compare": true, 00:16:07.195 "compare_and_write": true, 00:16:07.195 "abort": true, 00:16:07.195 "seek_hole": false, 00:16:07.195 "seek_data": false, 00:16:07.195 "copy": true, 00:16:07.195 "nvme_iov_md": false 00:16:07.195 }, 00:16:07.195 "memory_domains": [ 00:16:07.195 { 00:16:07.195 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:07.195 "dma_device_type": 0 00:16:07.195 } 00:16:07.195 ], 00:16:07.195 "driver_specific": { 00:16:07.195 "nvme": [ 00:16:07.195 { 00:16:07.195 "trid": { 00:16:07.195 "trtype": "RDMA", 00:16:07.195 "adrfam": "IPv4", 00:16:07.195 "traddr": "192.168.100.8", 00:16:07.195 "trsvcid": "4420", 00:16:07.195 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:07.195 }, 00:16:07.195 "ctrlr_data": { 00:16:07.195 "cntlid": 1, 00:16:07.195 "vendor_id": "0x8086", 00:16:07.195 "model_number": "SPDK bdev Controller", 00:16:07.195 "serial_number": "00000000000000000000", 00:16:07.195 "firmware_revision": "25.01", 00:16:07.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:07.195 "oacs": { 00:16:07.195 "security": 0, 
00:16:07.195 "format": 0, 00:16:07.195 "firmware": 0, 00:16:07.195 "ns_manage": 0 00:16:07.195 }, 00:16:07.195 "multi_ctrlr": true, 00:16:07.195 "ana_reporting": false 00:16:07.195 }, 00:16:07.195 "vs": { 00:16:07.195 "nvme_version": "1.3" 00:16:07.195 }, 00:16:07.195 "ns_data": { 00:16:07.195 "id": 1, 00:16:07.195 "can_share": true 00:16:07.195 } 00:16:07.195 } 00:16:07.195 ], 00:16:07.195 "mp_policy": "active_passive" 00:16:07.195 } 00:16:07.195 } 00:16:07.195 ] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [2024-12-06 16:29:01.348200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:07.195 [2024-12-06 16:29:01.362489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:07.195 [2024-12-06 16:29:01.384547] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.195 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.195 [ 00:16:07.195 { 00:16:07.195 "name": "nvme0n1", 00:16:07.195 "aliases": [ 00:16:07.195 "6ea6e099-0941-43ec-bd2e-266930897087" 00:16:07.195 ], 00:16:07.195 "product_name": "NVMe disk", 00:16:07.195 "block_size": 512, 00:16:07.195 "num_blocks": 2097152, 00:16:07.195 "uuid": "6ea6e099-0941-43ec-bd2e-266930897087", 00:16:07.195 "numa_id": 0, 00:16:07.195 "assigned_rate_limits": { 00:16:07.195 "rw_ios_per_sec": 0, 00:16:07.195 "rw_mbytes_per_sec": 0, 00:16:07.195 "r_mbytes_per_sec": 0, 00:16:07.195 "w_mbytes_per_sec": 0 00:16:07.195 }, 00:16:07.195 "claimed": false, 00:16:07.195 "zoned": false, 00:16:07.195 "supported_io_types": { 00:16:07.195 "read": true, 00:16:07.195 "write": true, 00:16:07.195 "unmap": false, 00:16:07.195 "flush": true, 00:16:07.195 "reset": true, 00:16:07.195 "nvme_admin": true, 00:16:07.195 "nvme_io": true, 00:16:07.195 "nvme_io_md": false, 00:16:07.195 "write_zeroes": true, 00:16:07.195 "zcopy": false, 00:16:07.195 "get_zone_info": false, 00:16:07.195 "zone_management": false, 00:16:07.195 "zone_append": false, 00:16:07.195 "compare": true, 00:16:07.195 "compare_and_write": true, 00:16:07.195 "abort": true, 00:16:07.195 "seek_hole": false, 00:16:07.195 "seek_data": false, 00:16:07.196 "copy": true, 00:16:07.196 "nvme_iov_md": false 00:16:07.196 }, 00:16:07.196 "memory_domains": [ 00:16:07.196 { 00:16:07.196 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:07.196 "dma_device_type": 0 00:16:07.196 } 00:16:07.196 ], 00:16:07.196 "driver_specific": { 00:16:07.196 "nvme": [ 00:16:07.196 { 00:16:07.196 "trid": { 00:16:07.196 "trtype": "RDMA", 00:16:07.196 "adrfam": "IPv4", 00:16:07.196 "traddr": "192.168.100.8", 
00:16:07.196 "trsvcid": "4420", 00:16:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:07.196 }, 00:16:07.196 "ctrlr_data": { 00:16:07.196 "cntlid": 2, 00:16:07.196 "vendor_id": "0x8086", 00:16:07.196 "model_number": "SPDK bdev Controller", 00:16:07.196 "serial_number": "00000000000000000000", 00:16:07.196 "firmware_revision": "25.01", 00:16:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:07.196 "oacs": { 00:16:07.196 "security": 0, 00:16:07.196 "format": 0, 00:16:07.196 "firmware": 0, 00:16:07.196 "ns_manage": 0 00:16:07.196 }, 00:16:07.196 "multi_ctrlr": true, 00:16:07.196 "ana_reporting": false 00:16:07.196 }, 00:16:07.196 "vs": { 00:16:07.196 "nvme_version": "1.3" 00:16:07.196 }, 00:16:07.196 "ns_data": { 00:16:07.196 "id": 1, 00:16:07.196 "can_share": true 00:16:07.196 } 00:16:07.196 } 00:16:07.196 ], 00:16:07.196 "mp_policy": "active_passive" 00:16:07.196 } 00:16:07.196 } 00:16:07.196 ] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.x6GSlgxjUK 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.x6GSlgxjUK 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.x6GSlgxjUK 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 [2024-12-06 16:29:01.470008] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 [2024-12-06 16:29:01.486050] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.196 nvme0n1 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 [ 00:16:07.196 { 00:16:07.196 "name": "nvme0n1", 00:16:07.196 "aliases": [ 00:16:07.196 "6ea6e099-0941-43ec-bd2e-266930897087" 00:16:07.196 ], 00:16:07.196 "product_name": "NVMe disk", 00:16:07.196 "block_size": 512, 00:16:07.196 "num_blocks": 2097152, 00:16:07.196 "uuid": "6ea6e099-0941-43ec-bd2e-266930897087", 00:16:07.196 "numa_id": 0, 00:16:07.196 "assigned_rate_limits": { 00:16:07.196 "rw_ios_per_sec": 0, 00:16:07.196 "rw_mbytes_per_sec": 0, 00:16:07.196 "r_mbytes_per_sec": 0, 00:16:07.196 "w_mbytes_per_sec": 0 00:16:07.196 }, 00:16:07.196 "claimed": false, 00:16:07.196 "zoned": false, 00:16:07.196 "supported_io_types": { 00:16:07.196 "read": true, 00:16:07.196 "write": true, 00:16:07.196 "unmap": false, 00:16:07.196 "flush": true, 00:16:07.196 "reset": true, 00:16:07.196 "nvme_admin": true, 00:16:07.196 "nvme_io": true, 00:16:07.196 "nvme_io_md": false, 00:16:07.196 "write_zeroes": true, 00:16:07.196 "zcopy": false, 00:16:07.196 "get_zone_info": false, 00:16:07.196 "zone_management": false, 00:16:07.196 "zone_append": false, 00:16:07.196 "compare": true, 00:16:07.196 "compare_and_write": true, 00:16:07.196 "abort": true, 00:16:07.196 "seek_hole": false, 00:16:07.196 "seek_data": false, 00:16:07.196 "copy": true, 00:16:07.196 "nvme_iov_md": false 00:16:07.196 }, 00:16:07.196 "memory_domains": [ 00:16:07.196 { 00:16:07.196 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:07.196 "dma_device_type": 0 00:16:07.196 } 00:16:07.196 ], 00:16:07.196 "driver_specific": { 00:16:07.196 "nvme": [ 00:16:07.196 { 00:16:07.196 "trid": { 00:16:07.196 "trtype": "RDMA", 00:16:07.196 "adrfam": "IPv4", 00:16:07.196 "traddr": "192.168.100.8", 00:16:07.196 "trsvcid": "4421", 00:16:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:07.196 }, 00:16:07.196 "ctrlr_data": { 00:16:07.196 "cntlid": 3, 00:16:07.196 "vendor_id": "0x8086", 00:16:07.196 "model_number": "SPDK bdev Controller", 00:16:07.196 
"serial_number": "00000000000000000000", 00:16:07.196 "firmware_revision": "25.01", 00:16:07.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:07.196 "oacs": { 00:16:07.196 "security": 0, 00:16:07.196 "format": 0, 00:16:07.196 "firmware": 0, 00:16:07.196 "ns_manage": 0 00:16:07.196 }, 00:16:07.196 "multi_ctrlr": true, 00:16:07.196 "ana_reporting": false 00:16:07.196 }, 00:16:07.196 "vs": { 00:16:07.196 "nvme_version": "1.3" 00:16:07.196 }, 00:16:07.196 "ns_data": { 00:16:07.196 "id": 1, 00:16:07.196 "can_share": true 00:16:07.196 } 00:16:07.196 } 00:16:07.196 ], 00:16:07.196 "mp_policy": "active_passive" 00:16:07.196 } 00:16:07.196 } 00:16:07.196 ] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.x6GSlgxjUK 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:07.196 rmmod nvme_rdma 00:16:07.196 rmmod nvme_fabrics 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3809578 ']' 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3809578 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3809578 ']' 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3809578 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:16:07.196 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3809578 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.197 16:29:01 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3809578' 00:16:07.197 killing process with pid 3809578 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3809578 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3809578 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:07.197 00:16:07.197 real 0m6.946s 00:16:07.197 user 0m2.727s 00:16:07.197 sys 0m4.698s 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:07.197 ************************************ 00:16:07.197 END TEST nvmf_async_init 00:16:07.197 ************************************ 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.197 16:29:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.456 ************************************ 00:16:07.456 START TEST dma 00:16:07.456 ************************************ 00:16:07.456 16:29:01 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:07.456 * Looking for test storage... 
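
Annotation: the async_init run that just ended finished on the experimental secure-channel path. Collected from the rpc_cmd traces above (PSK and paths verbatim from this run; the redirection into the key file is inferred, since xtrace does not show it):

    # The TLS leg traced at host/async_init.sh@53..@66 above.
    key_path=$(mktemp)    # /tmp/tmp.x6GSlgxjUK in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc.py keyring_file_add_key key0 "$key_path"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
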
00:16:07.456 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.456 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:07.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.456 --rc genhtml_branch_coverage=1 00:16:07.456 --rc genhtml_function_coverage=1 00:16:07.456 --rc genhtml_legend=1 00:16:07.456 --rc geninfo_all_blocks=1 00:16:07.456 --rc geninfo_unexecuted_blocks=1 00:16:07.457 00:16:07.457 ' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.457 --rc genhtml_branch_coverage=1 00:16:07.457 --rc genhtml_function_coverage=1 00:16:07.457 --rc genhtml_legend=1 00:16:07.457 --rc geninfo_all_blocks=1 00:16:07.457 --rc geninfo_unexecuted_blocks=1 00:16:07.457 00:16:07.457 ' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.457 --rc genhtml_branch_coverage=1 00:16:07.457 --rc genhtml_function_coverage=1 00:16:07.457 --rc genhtml_legend=1 00:16:07.457 --rc geninfo_all_blocks=1 00:16:07.457 --rc geninfo_unexecuted_blocks=1 00:16:07.457 00:16:07.457 ' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.457 --rc genhtml_branch_coverage=1 00:16:07.457 --rc genhtml_function_coverage=1 00:16:07.457 --rc genhtml_legend=1 00:16:07.457 --rc geninfo_all_blocks=1 00:16:07.457 --rc geninfo_unexecuted_blocks=1 00:16:07.457 00:16:07.457 ' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:07.457 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
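The "[: : integer expression expected" message recorded above is bash refusing to apply the numeric -eq operator to an empty string at nvmf/common.sh line 33; the test simply evaluates false and the run continues. A minimal standalone reproduction and two tolerant variants follow (the variable name is hypothetical, not the one common.sh actually tests):

    # hypothetical_flag stands in for whatever common.sh line 33 reads; it is
    # empty in this run, so '[' '' -eq 1 ']' prints the error seen in the log.
    hypothetical_flag=''
    if [ "$hypothetical_flag" -eq 1 ]; then echo enabled; fi
    # -> bash: [: : integer expression expected; the branch is not taken.
    # Variants that tolerate an empty value:
    if [ "${hypothetical_flag:-0}" -eq 1 ]; then echo enabled; fi   # default empty to 0
    if [[ $hypothetical_flag == 1 ]]; then echo enabled; fi         # string compare never errors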
00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:16:07.457 16:29:02 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:16:14.022 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:14.023 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:14.023 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:14.023 Found net devices under 0000:18:00.0: mlx_0_0 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:14.023 Found net devices under 0000:18:00.1: mlx_0_1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:14.023 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:14.023 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:14.023 altname enp24s0f0np0 00:16:14.023 altname ens785f0np0 00:16:14.023 inet 192.168.100.8/24 scope global mlx_0_0 00:16:14.023 valid_lft forever preferred_lft forever 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:14.023 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:14.023 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:14.023 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:14.023 altname enp24s0f1np1 00:16:14.023 altname ens785f1np1 00:16:14.023 inet 192.168.100.9/24 scope global mlx_0_1 00:16:14.024 valid_lft forever preferred_lft forever 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:14.024 192.168.100.9' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:14.024 192.168.100.9' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:14.024 192.168.100.9' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3812988 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3812988 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3812988 ']' 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.024 16:29:07 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 [2024-12-06 16:29:07.837575] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:16:14.024 [2024-12-06 16:29:07.837619] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.024 [2024-12-06 16:29:07.894883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:14.024 [2024-12-06 16:29:07.934776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.024 [2024-12-06 16:29:07.934812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.024 [2024-12-06 16:29:07.934818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.024 [2024-12-06 16:29:07.934824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.024 [2024-12-06 16:29:07.934829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
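The per-interface addresses traced above are harvested with a three-stage pipeline (ip -o -4 | awk | cut) and then split into the first and second target IPs with head and tail. A minimal sketch of that observable logic, assuming the mlx_0_0/mlx_0_1 interface names reported by this run (the helper name is illustrative, not the nvmf/common.sh function):

    # "ip -o -4" prints one line per address; field 4 is e.g. "192.168.100.8/24".
    get_ip_sketch() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_sketch mlx_0_0)
    $(get_ip_sketch mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9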
00:16:14.024 [2024-12-06 16:29:07.935950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.024 [2024-12-06 16:29:07.935953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 [2024-12-06 16:29:08.089194] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13d4940/0x13d8e30) succeed. 00:16:14.024 [2024-12-06 16:29:08.097161] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13d5e90/0x141a4d0) succeed. 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 Malloc0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 [2024-12-06 16:29:08.248124] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:14.024 { 00:16:14.024 "params": { 00:16:14.024 "name": "Nvme$subsystem", 00:16:14.024 "trtype": "$TEST_TRANSPORT", 00:16:14.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.024 "adrfam": "ipv4", 00:16:14.024 "trsvcid": "$NVMF_PORT", 00:16:14.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.024 "hdgst": ${hdgst:-false}, 00:16:14.024 "ddgst": ${ddgst:-false} 00:16:14.024 }, 00:16:14.024 "method": "bdev_nvme_attach_controller" 00:16:14.024 } 00:16:14.024 EOF 00:16:14.024 )") 00:16:14.024 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:16:14.025 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:16:14.025 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:16:14.025 16:29:08 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:14.025 "params": { 00:16:14.025 "name": "Nvme0", 00:16:14.025 "trtype": "rdma", 00:16:14.025 "traddr": "192.168.100.8", 00:16:14.025 "adrfam": "ipv4", 00:16:14.025 "trsvcid": "4420", 00:16:14.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:14.025 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:14.025 "hdgst": false, 00:16:14.025 "ddgst": false 00:16:14.025 }, 00:16:14.025 "method": "bdev_nvme_attach_controller" 00:16:14.025 }' 00:16:14.025 [2024-12-06 16:29:08.295641] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:16:14.025 [2024-12-06 16:29:08.295688] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813128 ] 00:16:14.025 [2024-12-06 16:29:08.350066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:14.025 [2024-12-06 16:29:08.389127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.025 [2024-12-06 16:29:08.389131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.284 bdev Nvme0n1 reports 1 memory domains 00:16:19.284 bdev Nvme0n1 supports RDMA memory domain 00:16:19.284 Initialization complete, running randrw IO for 5 sec on 2 cores 00:16:19.284 ========================================================================== 00:16:19.284 Latency [us] 00:16:19.284 IOPS MiB/s Average min max 00:16:19.284 Core 2: 22460.86 87.74 711.74 235.35 8283.62 00:16:19.284 Core 3: 22650.82 88.48 705.76 232.89 8389.64 00:16:19.284 ========================================================================== 00:16:19.284 Total : 45111.68 176.22 708.73 232.89 8389.64 00:16:19.284 00:16:19.284 Total operations: 225600, translate 225600 pull_push 0 memzero 0 00:16:19.284 16:29:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:16:19.284 16:29:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:16:19.284 16:29:13 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:16:19.284 [2024-12-06 16:29:13.800561] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:16:19.284 [2024-12-06 16:29:13.800610] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814093 ] 00:16:19.284 [2024-12-06 16:29:13.854861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.284 [2024-12-06 16:29:13.893621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.284 [2024-12-06 16:29:13.893635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.550 bdev Malloc0 reports 2 memory domains 00:16:24.550 bdev Malloc0 doesn't support RDMA memory domain 00:16:24.550 Initialization complete, running randrw IO for 5 sec on 2 cores 00:16:24.550 ========================================================================== 00:16:24.550 Latency [us] 00:16:24.550 IOPS MiB/s Average min max 00:16:24.550 Core 2: 14674.44 57.32 1089.68 353.49 1435.77 00:16:24.550 Core 3: 14892.76 58.17 1073.68 409.09 1750.20 00:16:24.550 ========================================================================== 00:16:24.550 Total : 29567.21 115.50 1081.62 353.49 1750.20 00:16:24.550 00:16:24.550 Total operations: 147892, translate 0 pull_push 591568 memzero 0 00:16:24.550 16:29:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:16:24.550 16:29:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:16:24.550 16:29:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:16:24.550 16:29:19 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:16:24.550 Ignoring -M option 00:16:24.550 [2024-12-06 16:29:19.209034] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:16:24.550 [2024-12-06 16:29:19.209082] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814977 ] 00:16:24.550 [2024-12-06 16:29:19.263709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.808 [2024-12-06 16:29:19.302445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.809 [2024-12-06 16:29:19.302448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.078 bdev 8525562e-9f77-4ab7-bd89-8319fc0e05a9 reports 1 memory domains 00:16:30.078 bdev 8525562e-9f77-4ab7-bd89-8319fc0e05a9 supports RDMA memory domain 00:16:30.078 Initialization complete, running randread IO for 5 sec on 2 cores 00:16:30.078 ========================================================================== 00:16:30.078 Latency [us] 00:16:30.078 IOPS MiB/s Average min max 00:16:30.078 Core 2: 77529.54 302.85 205.64 67.92 3233.54 00:16:30.078 Core 3: 76761.24 299.85 207.69 61.36 3157.23 00:16:30.078 ========================================================================== 00:16:30.078 Total : 154290.78 602.70 206.66 61.36 3233.54 00:16:30.078 00:16:30.078 Total operations: 771550, translate 0 pull_push 0 memzero 771550 00:16:30.078 16:29:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:16:30.336 [2024-12-06 16:29:24.838445] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:32.867 Initializing NVMe Controllers 00:16:32.867 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:16:32.867 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:32.867 Initialization complete. Launching workers. 00:16:32.867 ======================================================== 00:16:32.867 Latency(us) 00:16:32.867 Device Information : IOPS MiB/s Average min max 00:16:32.867 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7996.34 4978.77 14964.80 00:16:32.867 ======================================================== 00:16:32.867 Total : 2016.00 7.88 7996.34 4978.77 14964.80 00:16:32.867 00:16:32.867 16:29:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:16:32.867 16:29:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:16:32.867 16:29:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:16:32.867 16:29:27 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:16:32.867 [2024-12-06 16:29:27.181798] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:16:32.867 [2024-12-06 16:29:27.181846] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816537 ] 00:16:32.867 [2024-12-06 16:29:27.235472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:32.867 [2024-12-06 16:29:27.274028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.867 [2024-12-06 16:29:27.274032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.246 bdev 5dda4dfc-a6d7-4e85-a38e-5bfb67103a97 reports 1 memory domains 00:16:38.246 bdev 5dda4dfc-a6d7-4e85-a38e-5bfb67103a97 supports RDMA memory domain 00:16:38.246 Initialization complete, running randrw IO for 5 sec on 2 cores 00:16:38.246 ========================================================================== 00:16:38.246 Latency [us] 00:16:38.246 IOPS MiB/s Average min max 00:16:38.246 Core 2: 19490.72 76.14 820.29 46.63 10783.37 00:16:38.246 Core 3: 19991.43 78.09 799.68 21.61 10418.30 00:16:38.246 ========================================================================== 00:16:38.246 Total : 39482.16 154.23 809.85 21.61 10783.37 00:16:38.246 00:16:38.246 Total operations: 197446, translate 197341 pull_push 0 memzero 105 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:38.246 rmmod nvme_rdma 00:16:38.246 rmmod nvme_fabrics 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3812988 ']' 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3812988 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3812988 ']' 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3812988 00:16:38.246 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812988 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812988' 00:16:38.247 killing 
process with pid 3812988 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3812988 00:16:38.247 16:29:32 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3812988 00:16:38.506 16:29:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.506 16:29:33 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:38.506 00:16:38.506 real 0m31.147s 00:16:38.507 user 1m34.426s 00:16:38.507 sys 0m5.366s 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 ************************************ 00:16:38.507 END TEST dma 00:16:38.507 ************************************ 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.507 ************************************ 00:16:38.507 START TEST nvmf_identify 00:16:38.507 ************************************ 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:16:38.507 * Looking for test storage... 00:16:38.507 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:38.507 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.767 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:38.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.768 --rc genhtml_branch_coverage=1 00:16:38.768 --rc genhtml_function_coverage=1 00:16:38.768 --rc genhtml_legend=1 00:16:38.768 --rc geninfo_all_blocks=1 00:16:38.768 --rc geninfo_unexecuted_blocks=1 00:16:38.768 00:16:38.768 ' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:38.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.768 --rc genhtml_branch_coverage=1 00:16:38.768 --rc genhtml_function_coverage=1 00:16:38.768 --rc genhtml_legend=1 00:16:38.768 --rc geninfo_all_blocks=1 00:16:38.768 --rc geninfo_unexecuted_blocks=1 00:16:38.768 00:16:38.768 ' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:38.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.768 --rc genhtml_branch_coverage=1 00:16:38.768 --rc genhtml_function_coverage=1 00:16:38.768 --rc genhtml_legend=1 00:16:38.768 --rc geninfo_all_blocks=1 00:16:38.768 --rc geninfo_unexecuted_blocks=1 00:16:38.768 00:16:38.768 ' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:38.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.768 --rc genhtml_branch_coverage=1 00:16:38.768 --rc genhtml_function_coverage=1 00:16:38.768 --rc genhtml_legend=1 00:16:38.768 --rc geninfo_all_blocks=1 00:16:38.768 --rc geninfo_unexecuted_blocks=1 00:16:38.768 00:16:38.768 ' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:38.768 16:29:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.768 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:38.768 16:29:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.768 16:29:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.039 16:29:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:44.039 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:44.039 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:44.039 Found net devices under 0000:18:00.0: mlx_0_0 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:44.039 Found net devices under 0000:18:00.1: mlx_0_1 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:44.039 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:44.040 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.040 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:44.040 altname enp24s0f0np0 00:16:44.040 altname ens785f0np0 00:16:44.040 inet 192.168.100.8/24 scope global mlx_0_0 00:16:44.040 valid_lft forever preferred_lft forever 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:44.040 16:29:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:44.040 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.040 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:44.040 altname enp24s0f1np1 00:16:44.040 altname ens785f1np1 00:16:44.040 inet 192.168.100.9/24 scope global mlx_0_1 00:16:44.040 valid_lft forever preferred_lft forever 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:16:44.040 16:29:38 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:44.040 192.168.100.9' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:44.040 192.168.100.9' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:44.040 192.168.100.9' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3820691 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3820691 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3820691 ']' 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.040 [2024-12-06 16:29:38.555512] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:16:44.040 [2024-12-06 16:29:38.555556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.040 [2024-12-06 16:29:38.616139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.040 [2024-12-06 16:29:38.655126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.040 [2024-12-06 16:29:38.655165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.040 [2024-12-06 16:29:38.655171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.040 [2024-12-06 16:29:38.655176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.040 [2024-12-06 16:29:38.655181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.040 [2024-12-06 16:29:38.656406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.040 [2024-12-06 16:29:38.656422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.040 [2024-12-06 16:29:38.656507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.040 [2024-12-06 16:29:38.656509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.040 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.041 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:44.041 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:44.041 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.041 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 [2024-12-06 16:29:38.779685] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x121a0c0/0x121e5b0) succeed. 00:16:44.300 [2024-12-06 16:29:38.787891] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x121b750/0x125fc50) succeed. 
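
Note on the `[: : integer expression expected` error reported earlier from nvmf/common.sh line 33: the xtrace shows it comes from `'[' '' -eq 1 ']'`, a numeric test against a SPDK_TEST_* flag that was never exported for this job, so `[` receives an empty string where it expects an integer. The run continues because the failing `[` merely returns non-zero. A minimal defensive pattern for such guards (the flag name below is a stand-in, not necessarily the one common.sh actually tests):

    # default unset/empty flags to 0 before a numeric comparison
    flag=${SPDK_TEST_SOME_FLAG:-0}    # SPDK_TEST_SOME_FLAG is a placeholder name
    if [ "$flag" -eq 1 ]; then
        : # append the corresponding nvmf_tgt argument here
    fi
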
00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 Malloc0 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 [2024-12-06 16:29:38.993433] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:38 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 [ 00:16:44.300 { 00:16:44.300 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:44.300 "subtype": "Discovery", 00:16:44.300 "listen_addresses": [ 00:16:44.300 { 00:16:44.300 "trtype": "RDMA", 
00:16:44.300 "adrfam": "IPv4", 00:16:44.300 "traddr": "192.168.100.8", 00:16:44.300 "trsvcid": "4420" 00:16:44.300 } 00:16:44.300 ], 00:16:44.300 "allow_any_host": true, 00:16:44.300 "hosts": [] 00:16:44.300 }, 00:16:44.300 { 00:16:44.300 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.300 "subtype": "NVMe", 00:16:44.300 "listen_addresses": [ 00:16:44.300 { 00:16:44.300 "trtype": "RDMA", 00:16:44.300 "adrfam": "IPv4", 00:16:44.300 "traddr": "192.168.100.8", 00:16:44.300 "trsvcid": "4420" 00:16:44.300 } 00:16:44.300 ], 00:16:44.300 "allow_any_host": true, 00:16:44.300 "hosts": [], 00:16:44.300 "serial_number": "SPDK00000000000001", 00:16:44.300 "model_number": "SPDK bdev Controller", 00:16:44.300 "max_namespaces": 32, 00:16:44.300 "min_cntlid": 1, 00:16:44.300 "max_cntlid": 65519, 00:16:44.300 "namespaces": [ 00:16:44.300 { 00:16:44.300 "nsid": 1, 00:16:44.300 "bdev_name": "Malloc0", 00:16:44.300 "name": "Malloc0", 00:16:44.300 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:44.300 "eui64": "ABCDEF0123456789", 00:16:44.300 "uuid": "f204d8bb-d1d5-4fd2-8807-f16814f7202d" 00:16:44.300 } 00:16:44.300 ] 00:16:44.300 } 00:16:44.300 ] 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.300 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:44.573 [2024-12-06 16:29:39.044439] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:16:44.573 [2024-12-06 16:29:39.044479] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820865 ] 00:16:44.573 [2024-12-06 16:29:39.100230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:44.573 [2024-12-06 16:29:39.100297] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:16:44.573 [2024-12-06 16:29:39.100312] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:16:44.573 [2024-12-06 16:29:39.100315] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:16:44.573 [2024-12-06 16:29:39.100346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:44.573 [2024-12-06 16:29:39.110920] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:16:44.573 [2024-12-06 16:29:39.120593] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0 00:16:44.573 [2024-12-06 16:29:39.120603] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:16:44.573 [2024-12-06 16:29:39.120609] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120614] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120618] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120622] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120628] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120632] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120636] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120640] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120645] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120648] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120652] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120656] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120660] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120664] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120668] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120672] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120676] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120680] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120684] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120688] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120692] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120696] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120700] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 
16:29:39.120704] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120708] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120712] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120716] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120720] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120724] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120728] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120732] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120736] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:16:44.573 [2024-12-06 16:29:39.120740] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0 00:16:44.573 [2024-12-06 16:29:39.120743] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:16:44.573 [2024-12-06 16:29:39.120763] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.120774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x182100 00:16:44.573 [2024-12-06 16:29:39.126379] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.573 [2024-12-06 16:29:39.126387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:16:44.573 [2024-12-06 16:29:39.126392] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126397] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:44.573 [2024-12-06 16:29:39.126403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:44.573 [2024-12-06 16:29:39.126408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:44.573 [2024-12-06 16:29:39.126419] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.573 [2024-12-06 16:29:39.126446] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.573 [2024-12-06 16:29:39.126450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:16:44.573 [2024-12-06 16:29:39.126455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:44.573 [2024-12-06 16:29:39.126459] 
nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:44.573 [2024-12-06 16:29:39.126469] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.573 [2024-12-06 16:29:39.126494] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.573 [2024-12-06 16:29:39.126498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:16:44.573 [2024-12-06 16:29:39.126503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:44.573 [2024-12-06 16:29:39.126506] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:44.573 [2024-12-06 16:29:39.126517] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.573 [2024-12-06 16:29:39.126536] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.573 [2024-12-06 16:29:39.126541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:44.573 [2024-12-06 16:29:39.126545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:44.573 [2024-12-06 16:29:39.126549] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126555] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.573 [2024-12-06 16:29:39.126561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.573 [2024-12-06 16:29:39.126576] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.126580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.126584] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:44.574 [2024-12-06 16:29:39.126588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:44.574 [2024-12-06 16:29:39.126592] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 
16:29:39.126596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:44.574 [2024-12-06 16:29:39.126703] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:44.574 [2024-12-06 16:29:39.126707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:44.574 [2024-12-06 16:29:39.126714] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.574 [2024-12-06 16:29:39.126737] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.126741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.126746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:44.574 [2024-12-06 16:29:39.126749] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126755] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.574 [2024-12-06 16:29:39.126776] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.126780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.126784] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:44.574 [2024-12-06 16:29:39.126788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126791] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:44.574 [2024-12-06 16:29:39.126802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126810] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100 00:16:44.574 [2024-12-06 16:29:39.126851] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
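
The nvme_ctrlr DEBUG lines here walk the standard controller-enable handshake: read VS and CAP, observe CC.EN = 0 && CSTS.RDY = 0, write CC.EN = 1, then poll until CSTS.RDY = 1. To pull just that state machine out of a capture like this one (the log file name is a placeholder):

    # list the ctrlr-init state transitions in order, collapsing adjacent repeats
    grep -o 'setting state to [a-zA-Z0-9 .=]*' nvmf_identify.log | uniq
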
00:16:44.574 [2024-12-06 16:29:39.126856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.126863] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:44.574 [2024-12-06 16:29:39.126866] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:44.574 [2024-12-06 16:29:39.126870] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:44.574 [2024-12-06 16:29:39.126874] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:44.574 [2024-12-06 16:29:39.126877] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:44.574 [2024-12-06 16:29:39.126881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126885] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126895] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.574 [2024-12-06 16:29:39.126922] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.126926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.126935] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.574 [2024-12-06 16:29:39.126945] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.574 [2024-12-06 16:29:39.126955] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.574 [2024-12-06 16:29:39.126964] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.574 [2024-12-06 16:29:39.126973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126976] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:44.574 [2024-12-06 16:29:39.126987] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.126993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.574 [2024-12-06 16:29:39.127007] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.127015] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:44.574 [2024-12-06 16:29:39.127021] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:44.574 [2024-12-06 16:29:39.127025] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127032] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100 00:16:44.574 [2024-12-06 16:29:39.127064] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.127073] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127080] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:44.574 [2024-12-06 16:29:39.127102] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x400 key:0x182100 00:16:44.574 [2024-12-06 16:29:39.127114] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.574 [2024-12-06 16:29:39.127135] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
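
The GET LOG PAGE (02) commands traced above are the discovery-log fetch whose decoded form is printed next. Since nvmftestinit already ran modprobe nvme-rdma on this node, the same log page can also be read through the kernel path with stock nvme-cli, for example:

    # kernel-side equivalent of the discovery fetch the identify tool performs here
    nvme discover -t rdma -a 192.168.100.8 -s 4420
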
00:16:44.574 [2024-12-06 16:29:39.127148] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0xc00 key:0x182100 00:16:44.574 [2024-12-06 16:29:39.127157] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127162] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.127169] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127183] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.127194] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x8 key:0x182100 00:16:44.574 [2024-12-06 16:29:39.127204] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.574 [2024-12-06 16:29:39.127223] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.574 [2024-12-06 16:29:39.127227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:44.574 [2024-12-06 16:29:39.127234] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.574 ===================================================== 00:16:44.575 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:44.575 ===================================================== 00:16:44.575 Controller Capabilities/Features 00:16:44.575 ================================ 00:16:44.575 Vendor ID: 0000 00:16:44.575 Subsystem Vendor ID: 0000 00:16:44.575 Serial Number: .................... 00:16:44.575 Model Number: ........................................ 
00:16:44.575 Firmware Version: 25.01
00:16:44.575 Recommended Arb Burst: 0
00:16:44.575 IEEE OUI Identifier: 00 00 00
00:16:44.575 Multi-path I/O
00:16:44.575 May have multiple subsystem ports: No
00:16:44.575 May have multiple controllers: No
00:16:44.575 Associated with SR-IOV VF: No
00:16:44.575 Max Data Transfer Size: 131072
00:16:44.575 Max Number of Namespaces: 0
00:16:44.575 Max Number of I/O Queues: 1024
00:16:44.575 NVMe Specification Version (VS): 1.3
00:16:44.575 NVMe Specification Version (Identify): 1.3
00:16:44.575 Maximum Queue Entries: 128
00:16:44.575 Contiguous Queues Required: Yes
00:16:44.575 Arbitration Mechanisms Supported
00:16:44.575 Weighted Round Robin: Not Supported
00:16:44.575 Vendor Specific: Not Supported
00:16:44.575 Reset Timeout: 15000 ms
00:16:44.575 Doorbell Stride: 4 bytes
00:16:44.575 NVM Subsystem Reset: Not Supported
00:16:44.575 Command Sets Supported
00:16:44.575 NVM Command Set: Supported
00:16:44.575 Boot Partition: Not Supported
00:16:44.575 Memory Page Size Minimum: 4096 bytes
00:16:44.575 Memory Page Size Maximum: 4096 bytes
00:16:44.575 Persistent Memory Region: Not Supported
00:16:44.575 Optional Asynchronous Events Supported
00:16:44.575 Namespace Attribute Notices: Not Supported
00:16:44.575 Firmware Activation Notices: Not Supported
00:16:44.575 ANA Change Notices: Not Supported
00:16:44.575 PLE Aggregate Log Change Notices: Not Supported
00:16:44.575 LBA Status Info Alert Notices: Not Supported
00:16:44.575 EGE Aggregate Log Change Notices: Not Supported
00:16:44.575 Normal NVM Subsystem Shutdown event: Not Supported
00:16:44.575 Zone Descriptor Change Notices: Not Supported
00:16:44.575 Discovery Log Change Notices: Supported
00:16:44.575 Controller Attributes
00:16:44.575 128-bit Host Identifier: Not Supported
00:16:44.575 Non-Operational Permissive Mode: Not Supported
00:16:44.575 NVM Sets: Not Supported
00:16:44.575 Read Recovery Levels: Not Supported
00:16:44.575 Endurance Groups: Not Supported
00:16:44.575 Predictable Latency Mode: Not Supported
00:16:44.575 Traffic Based Keep Alive: Not Supported
00:16:44.575 Namespace Granularity: Not Supported
00:16:44.575 SQ Associations: Not Supported
00:16:44.575 UUID List: Not Supported
00:16:44.575 Multi-Domain Subsystem: Not Supported
00:16:44.575 Fixed Capacity Management: Not Supported
00:16:44.575 Variable Capacity Management: Not Supported
00:16:44.575 Delete Endurance Group: Not Supported
00:16:44.575 Delete NVM Set: Not Supported
00:16:44.575 Extended LBA Formats Supported: Not Supported
00:16:44.575 Flexible Data Placement Supported: Not Supported
00:16:44.575 
00:16:44.575 Controller Memory Buffer Support
00:16:44.575 ================================
00:16:44.575 Supported: No
00:16:44.575 
00:16:44.575 Persistent Memory Region Support
00:16:44.575 ================================
00:16:44.575 Supported: No
00:16:44.575 
00:16:44.575 Admin Command Set Attributes
00:16:44.575 ============================
00:16:44.575 Security Send/Receive: Not Supported
00:16:44.575 Format NVM: Not Supported
00:16:44.575 Firmware Activate/Download: Not Supported
00:16:44.575 Namespace Management: Not Supported
00:16:44.575 Device Self-Test: Not Supported
00:16:44.575 Directives: Not Supported
00:16:44.575 NVMe-MI: Not Supported
00:16:44.575 Virtualization Management: Not Supported
00:16:44.575 Doorbell Buffer Config: Not Supported
00:16:44.575 Get LBA Status Capability: Not Supported
00:16:44.575 Command & Feature Lockdown Capability: Not Supported
00:16:44.575 Abort Command Limit: 1
00:16:44.575 Async Event Request Limit: 4
00:16:44.575 Number of Firmware Slots: N/A
00:16:44.575 Firmware Slot 1 Read-Only: N/A
00:16:44.575 Firmware Activation Without Reset: N/A
00:16:44.575 Multiple Update Detection Support: N/A
00:16:44.575 Firmware Update Granularity: No Information Provided
00:16:44.575 Per-Namespace SMART Log: No
00:16:44.575 Asymmetric Namespace Access Log Page: Not Supported
00:16:44.575 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:16:44.575 Command Effects Log Page: Not Supported
00:16:44.575 Get Log Page Extended Data: Supported
00:16:44.575 Telemetry Log Pages: Not Supported
00:16:44.575 Persistent Event Log Pages: Not Supported
00:16:44.575 Supported Log Pages Log Page: May Support
00:16:44.575 Commands Supported & Effects Log Page: Not Supported
00:16:44.575 Feature Identifiers & Effects Log Page: May Support
00:16:44.575 NVMe-MI Commands & Effects Log Page: May Support
00:16:44.575 Data Area 4 for Telemetry Log: Not Supported
00:16:44.575 Error Log Page Entries Supported: 128
00:16:44.575 Keep Alive: Not Supported
00:16:44.575 
00:16:44.575 NVM Command Set Attributes
00:16:44.575 ==========================
00:16:44.575 Submission Queue Entry Size
00:16:44.575 Max: 1
00:16:44.575 Min: 1
00:16:44.575 Completion Queue Entry Size
00:16:44.575 Max: 1
00:16:44.575 Min: 1
00:16:44.575 Number of Namespaces: 0
00:16:44.575 Compare Command: Not Supported
00:16:44.575 Write Uncorrectable Command: Not Supported
00:16:44.575 Dataset Management Command: Not Supported
00:16:44.575 Write Zeroes Command: Not Supported
00:16:44.575 Set Features Save Field: Not Supported
00:16:44.575 Reservations: Not Supported
00:16:44.575 Timestamp: Not Supported
00:16:44.575 Copy: Not Supported
00:16:44.575 Volatile Write Cache: Not Present
00:16:44.575 Atomic Write Unit (Normal): 1
00:16:44.575 Atomic Write Unit (PFail): 1
00:16:44.575 Atomic Compare & Write Unit: 1
00:16:44.575 Fused Compare & Write: Supported
00:16:44.575 Scatter-Gather List
00:16:44.575 SGL Command Set: Supported
00:16:44.575 SGL Keyed: Supported
00:16:44.575 SGL Bit Bucket Descriptor: Not Supported
00:16:44.575 SGL Metadata Pointer: Not Supported
00:16:44.575 Oversized SGL: Not Supported
00:16:44.575 SGL Metadata Address: Not Supported
00:16:44.575 SGL Offset: Supported
00:16:44.575 Transport SGL Data Block: Not Supported
00:16:44.575 Replay Protected Memory Block: Not Supported
00:16:44.575 
00:16:44.575 Firmware Slot Information
00:16:44.575 =========================
00:16:44.575 Active slot: 0
00:16:44.575 
00:16:44.575 
00:16:44.575 Error Log
00:16:44.575 =========
00:16:44.575 
00:16:44.575 Active Namespaces
00:16:44.575 =================
00:16:44.575 Discovery Log Page
00:16:44.575 ==================
00:16:44.575 Generation Counter: 2
00:16:44.575 Number of Records: 2
00:16:44.575 Record Format: 0
00:16:44.575 
00:16:44.575 Discovery Log Entry 0
00:16:44.575 ----------------------
00:16:44.575 Transport Type: 1 (RDMA)
00:16:44.575 Address Family: 1 (IPv4)
00:16:44.575 Subsystem Type: 3 (Current Discovery Subsystem)
00:16:44.575 Entry Flags:
00:16:44.575 Duplicate Returned Information: 1
00:16:44.575 Explicit Persistent Connection Support for Discovery: 1
00:16:44.575 Transport Requirements:
00:16:44.575 Secure Channel: Not Required
00:16:44.575 Port ID: 0 (0x0000)
00:16:44.575 Controller ID: 65535 (0xffff)
00:16:44.575 Admin Max SQ Size: 128
00:16:44.575 Transport Service Identifier: 4420
00:16:44.575 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:16:44.575 Transport Address: 192.168.100.8
00:16:44.575 Transport Specific Address Subtype - RDMA
00:16:44.575 RDMA QP Service Type: 1 (Reliable Connected)
00:16:44.575 RDMA Provider Type: 1 (No provider specified)
00:16:44.575 RDMA CM Service: 1 (RDMA_CM)
00:16:44.575 Discovery Log Entry 1
00:16:44.575 ----------------------
00:16:44.575 Transport Type: 1 (RDMA)
00:16:44.575 Address Family: 1 (IPv4)
00:16:44.575 Subsystem Type: 2 (NVM Subsystem)
00:16:44.575 Entry Flags:
00:16:44.575 Duplicate Returned Information: 0
00:16:44.575 Explicit Persistent Connection Support for Discovery: 0
00:16:44.575 Transport Requirements:
00:16:44.575 Secure Channel: Not Required
00:16:44.575 Port ID: 0 (0x0000)
00:16:44.575 Controller ID: 65535 (0xffff)
00:16:44.575 Admin Max SQ Size: [2024-12-06 16:29:39.127291] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:16:44.575 [2024-12-06 16:29:39.127298] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47481 doesn't match qid
00:16:44.576 [2024-12-06 16:29:39.127310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32718 cdw0:4d67be0 sqhd:e740 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127314] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47481 doesn't match qid
00:16:44.576 [2024-12-06 16:29:39.127320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32718 cdw0:4d67be0 sqhd:e740 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127324] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47481 doesn't match qid
00:16:44.576 [2024-12-06 16:29:39.127329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32718 cdw0:4d67be0 sqhd:e740 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127333] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47481 doesn't match qid
00:16:44.576 [2024-12-06 16:29:39.127338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32718 cdw0:4d67be0 sqhd:e740 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127345] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127365] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127379] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127388] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127406] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127417] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:16:44.576 [2024-12-06 16:29:39.127421] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:16:44.576 [2024-12-06 16:29:39.127425] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127431] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127456] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127464] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127473] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127496] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127505] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127511] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127535] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127543] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127550] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127574] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127582] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100
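The discovery log page above advertises two RDMA/IPv4 records behind 192.168.100.8:4420: the discovery subsystem itself (entry 0) and one NVM subsystem (entry 1, whose NQN is cut off by the interleaved destruct messages). A minimal sketch of how a host would typically walk that log with nvme-cli, assuming an RDMA-capable initiator; the subsystem NQN below is a placeholder, not a value taken from this log:

    # fetch the discovery log shown above from the target
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    # attach the advertised NVM subsystem (placeholder NQN; entry 1's NQN is truncated in this log)
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # the namespace then shows up as a local block device, e.g. /dev/nvme0n1
    nvme list
    # detach when done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1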
00:16:44.576 [2024-12-06 16:29:39.127589] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127611] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127619] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127626] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127653] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127661] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127667] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127687] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127695] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127702] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127724] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127732] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127738] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127765] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
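The nvme_ctrlr_shutdown_set_cc_done messages above mark the start of an orderly fabrics shutdown of the discovery controller: the single FABRIC PROPERTY SET appears to be the write of CC.SHN, and the long run of FABRIC PROPERTY GET / SUCCESS pairs that follows is consistent with the host polling CSTS until the shutdown-status field reports completion, bounded by the 10000 ms timeout printed above. A rough command-line equivalent of that register traffic, assuming a connected fabrics controller at the placeholder /dev/nvme0, with property offsets per the NVMe specification:

    # CC (offset 0x14): shutdown is requested by setting CC.SHN
    nvme get-property /dev/nvme0 -o 0x14 -H
    # CSTS (offset 0x1c): polled until CSTS.SHST reads "shutdown complete"
    nvme get-property /dev/nvme0 -o 0x1c -H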
00:16:44.576 [2024-12-06 16:29:39.127769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127773] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127780] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127801] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127809] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127815] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127841] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127849] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127855] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127880] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127888] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127894] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.576 [2024-12-06 16:29:39.127915] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.576 [2024-12-06 16:29:39.127919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:16:44.576 [2024-12-06 16:29:39.127926] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127933] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.576 [2024-12-06 16:29:39.127938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK
ADDRESS 0x0 len:0x0 key:0x0 00:16:44.576 [2024-12-06 16:29:39.127954] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.576 [2024-12-06 16:29:39.127958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.127962] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.127968] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.127974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.127988] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.127992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.127996] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128002] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128025] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128033] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128039] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128062] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128070] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128076] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128103] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128112] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128118] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128141] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128150] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128156] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128176] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128184] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128190] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128212] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128220] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128226] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128244] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128252] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128259] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128283] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128291] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128297] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128321] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128329] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128336] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128360] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128369] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128378] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128402] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128410] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128416] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128439] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128447] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128453] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128473] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128481] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128487] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.577 [2024-12-06 16:29:39.128511] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.577 [2024-12-06 16:29:39.128515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:16:44.577 [2024-12-06 16:29:39.128519] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.577 [2024-12-06 16:29:39.128525] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128546] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128554] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128560] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128582] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128591] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128597] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128622] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128630] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128636] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128656] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128664] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128670] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128690] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128698] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128704] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128728] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128736] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128742] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128766] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128774] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128780] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128805] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128813] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128820] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128847] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128854] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128861] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128888] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128896] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128902] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128921] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128929] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128935] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128959] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.128967] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128973] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.128979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.128994] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.128998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129002] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129008] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.129034] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.129037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129041] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129048] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.129072] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.129076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129080] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129086] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.129105] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.129109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129113] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129120] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.129139] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.129143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129147] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129153] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.578 [2024-12-06 16:29:39.129177] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.578 [2024-12-06 16:29:39.129181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:16:44.578 [2024-12-06 16:29:39.129185] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100 00:16:44.578 [2024-12-06 16:29:39.129191] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129215] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129223] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129229] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129251] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129259] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129265] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129291] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129299] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129305] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129328] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129335] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129341] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129366] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129373] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129384] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129402] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129410] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129416] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129439] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129447] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129453] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129478] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129486] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129493] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129515] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129523] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129529] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129549] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129557] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129563] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129582] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129590] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129597] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129616] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129624] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129630] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.579 [2024-12-06 16:29:39.129654] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.579 [2024-12-06 16:29:39.129658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:16:44.579 [2024-12-06 16:29:39.129662] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.579 [2024-12-06 16:29:39.129669] 
nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.579 [2024-12-06 16:29:39.129692] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.579 [2024-12-06 16:29:39.129696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0
00:16:44.579 [2024-12-06 16:29:39.129700] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129706] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.579 [2024-12-06 16:29:39.129727] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.579 [2024-12-06 16:29:39.129731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0
00:16:44.579 [2024-12-06 16:29:39.129735] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129741] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.579 [2024-12-06 16:29:39.129765] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.579 [2024-12-06 16:29:39.129769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0
00:16:44.579 [2024-12-06 16:29:39.129773] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129779] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.579 [2024-12-06 16:29:39.129804] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.579 [2024-12-06 16:29:39.129808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0
00:16:44.579 [2024-12-06 16:29:39.129812] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129818] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.579 [2024-12-06 16:29:39.129824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.579 [2024-12-06 16:29:39.129842] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.579 [2024-12-06 16:29:39.129846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:16:44.579 [2024-12-06 16:29:39.129850] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129856] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.129883] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.129887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.129892] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129899] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.129923] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.129927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.129931] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129937] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.129958] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.129966] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129972] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.129977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.129999] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130007] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130013] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130032] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130040] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130046] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130070] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130078] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130084] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130108] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130117] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130124] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130145] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130153] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130159] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130178] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130186] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130192] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130218] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130226] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130232] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130254] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130262] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130268] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130294] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130301] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130308] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130332] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.130341] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130347] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.130352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.130369] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.130373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.134382] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.134389] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.134395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.580 [2024-12-06 16:29:39.134410] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.580 [2024-12-06 16:29:39.134414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0001 p:0 m:0 dnr:0
00:16:44.580 [2024-12-06 16:29:39.134418] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100
00:16:44.580 [2024-12-06 16:29:39.134423] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:16:44.580 128
00:16:44.580 Transport Service Identifier: 4420
00:16:44.580 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:16:44.580 Transport Address: 192.168.100.8
00:16:44.580 Transport Specific Address Subtype - RDMA
00:16:44.580 RDMA QP Service Type: 1 (Reliable Connected)
00:16:44.580 RDMA Provider Type: 1 (No provider specified)
00:16:44.580 RDMA CM Service: 1 (RDMA_CM)
00:16:44.580 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:16:44.580 [2024-12-06 16:29:39.202344] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization...
00:16:44.580 [2024-12-06 16:29:39.202388] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3820867 ]
00:16:44.580 [2024-12-06 16:29:39.256584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:16:44.580 [2024-12-06 16:29:39.256643] nvme_rdma.c:2448:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:16:44.580 [2024-12-06 16:29:39.256657] nvme_rdma.c:1235:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:16:44.581 [2024-12-06 16:29:39.256660] nvme_rdma.c:1239:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:16:44.581 [2024-12-06 16:29:39.256686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:16:44.581 [2024-12-06 16:29:39.270916] nvme_rdma.c: 456:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:16:44.581 [2024-12-06 16:29:39.285661] nvme_rdma.c:1121:nvme_rdma_connect_established: *DEBUG*: rc =0
00:16:44.581 [2024-12-06 16:29:39.285675] nvme_rdma.c:1126:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:16:44.581 [2024-12-06 16:29:39.285681] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285686] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285690] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285694] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285698] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285702] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285707] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285710] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285715] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285719] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285723] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285727] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285731] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285735] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285739] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285743] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285747] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285751] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285755] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285759] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285763] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285767] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285771] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285775] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285779] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285783] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285786] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285791] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285795] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285799] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285802] nvme_rdma.c: 909:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285807] nvme_rdma.c:1140:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:16:44.581 [2024-12-06 16:29:39.285811] nvme_rdma.c:1143:nvme_rdma_connect_established: *DEBUG*: rc =0
00:16:44.581 [2024-12-06 16:29:39.285814] nvme_rdma.c:1148:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:16:44.581 [2024-12-06 16:29:39.285828] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.581 [2024-12-06 16:29:39.285839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd0c0 len:0x400 key:0x182100
00:16:44.844 [2024-12-06 16:29:39.291379] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.844 [2024-12-06 16:29:39.291386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:16:44.844 [2024-12-06 16:29:39.291392] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291397] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:16:44.845 [2024-12-06 16:29:39.291402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291407] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291416] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291450] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291462] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291466] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291472] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291495] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291507] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291517] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291547] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291561] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291567] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291592] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291600] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:16:44.845 [2024-12-06 16:29:39.291603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291607] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291718] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:16:44.845 [2024-12-06 16:29:39.291722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291728] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291753] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:16:44.845 [2024-12-06 16:29:39.291765] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291771] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291792] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291800] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:16:44.845 [2024-12-06 16:29:39.291804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.291807] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:16:44.845 [2024-12-06 16:29:39.291821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.291828] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100
00:16:44.845 [2024-12-06 16:29:39.291874] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291884] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:16:44.845 [2024-12-06 16:29:39.291888] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:16:44.845 [2024-12-06 16:29:39.291891] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:16:44.845 [2024-12-06 16:29:39.291895] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:16:44.845 [2024-12-06 16:29:39.291898] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:16:44.845 [2024-12-06 16:29:39.291902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.291906] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.291916] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.291946] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.845 [2024-12-06 16:29:39.291950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:16:44.845 [2024-12-06 16:29:39.291957] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce3c0 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.845 [2024-12-06 16:29:39.291967] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce500 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.845 [2024-12-06 16:29:39.291976] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.845 [2024-12-06 16:29:39.291986] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.291991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.845 [2024-12-06 16:29:39.291994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.291998] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.292004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:16:44.845 [2024-12-06 16:29:39.292009] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.845 [2024-12-06 16:29:39.292016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.845 [2024-12-06 16:29:39.292035] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292043] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:16:44.846 [2024-12-06 16:29:39.292048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292052] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292067] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.846 [2024-12-06 16:29:39.292087] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292142] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292154] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ca000 len:0x1000 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292183] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292194] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:16:44.846 [2024-12-06 16:29:39.292204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292208] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292220] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292260] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292275] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292279] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292291] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292319] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292332] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292360] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:16:44.846 [2024-12-06 16:29:39.292363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:16:44.846 [2024-12-06 16:29:39.292367] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:16:44.846 [2024-12-06 16:29:39.292382] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.846 [2024-12-06 16:29:39.292393] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:16:44.846 [2024-12-06 16:29:39.292406] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292414] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292421] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.846 [2024-12-06 16:29:39.292433] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292441] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292447] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292456] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292462] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.846 [2024-12-06 16:29:39.292484] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292492] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292498] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.846 [2024-12-06 16:29:39.292521] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292529] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292539] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce280 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x2000 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292551] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce8c0 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cd000 len:0x200 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292562] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003cea00 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x200 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292575] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c5000 len:0x1000 key:0x182100
00:16:44.846 [2024-12-06 16:29:39.292587] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292598] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292607] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:16:44.846 [2024-12-06 16:29:39.292618] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100
00:16:44.846 [2024-12-06 16:29:39.292622] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.846 [2024-12-06 16:29:39.292625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:16:44.847 [2024-12-06 16:29:39.292630] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100
00:16:44.847 [2024-12-06 16:29:39.292640] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.847 [2024-12-06 16:29:39.292644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:16:44.847 [2024-12-06 16:29:39.292650] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100
00:16:44.847 =====================================================
00:16:44.847 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:16:44.847 =====================================================
00:16:44.847 Controller Capabilities/Features
00:16:44.847 ================================
00:16:44.847 Vendor ID: 8086
00:16:44.847 Subsystem Vendor ID: 8086
00:16:44.847 Serial Number: SPDK00000000000001
00:16:44.847 Model Number: SPDK bdev Controller
00:16:44.847 Firmware Version: 25.01
00:16:44.847 Recommended Arb Burst: 6
00:16:44.847 IEEE OUI Identifier: e4 d2 5c
00:16:44.847 Multi-path I/O
00:16:44.847 May have multiple subsystem ports: Yes
00:16:44.847 May have multiple controllers: Yes
00:16:44.847 Associated with SR-IOV VF: No
00:16:44.847 Max Data Transfer Size: 131072
00:16:44.847 Max Number of Namespaces: 32
00:16:44.847 Max Number of I/O Queues: 127
00:16:44.847 NVMe Specification Version (VS): 1.3
00:16:44.847 NVMe Specification Version (Identify): 1.3
00:16:44.847 Maximum Queue Entries: 128
00:16:44.847 Contiguous Queues Required: Yes
00:16:44.847 Arbitration Mechanisms Supported
00:16:44.847 Weighted Round Robin: Not Supported
00:16:44.847 Vendor Specific: Not Supported
00:16:44.847 Reset Timeout: 15000 ms
00:16:44.847 Doorbell Stride: 4 bytes
00:16:44.847 NVM Subsystem Reset: Not Supported
00:16:44.847 Command Sets Supported
00:16:44.847 NVM Command Set: Supported
00:16:44.847 Boot Partition: Not Supported
00:16:44.847 Memory Page Size Minimum: 4096 bytes
00:16:44.847 Memory Page Size Maximum: 4096 bytes
00:16:44.847 Persistent Memory Region: Not Supported
00:16:44.847 Optional Asynchronous Events Supported
00:16:44.847 Namespace Attribute Notices: Supported
00:16:44.847 Firmware Activation Notices: Not Supported
00:16:44.847 ANA Change Notices: Not Supported
00:16:44.847 PLE Aggregate Log Change Notices: Not Supported
00:16:44.847 LBA Status Info Alert Notices: Not Supported
00:16:44.847 EGE Aggregate Log Change Notices: Not Supported
00:16:44.847 Normal NVM Subsystem Shutdown event: Not Supported
00:16:44.847 Zone Descriptor Change Notices: Not Supported
00:16:44.847 Discovery Log Change Notices: Not Supported
00:16:44.847 Controller Attributes
00:16:44.847 128-bit Host Identifier: Supported
00:16:44.847 Non-Operational Permissive Mode: Not Supported
00:16:44.847 NVM Sets: Not Supported
00:16:44.847 Read Recovery Levels: Not Supported
00:16:44.847 Endurance Groups: Not Supported
00:16:44.847 Predictable Latency Mode: Not Supported
00:16:44.847 Traffic Based Keep ALive: Not Supported
00:16:44.847 Namespace Granularity: Not Supported
00:16:44.847 SQ Associations: Not Supported
00:16:44.847 UUID List: Not Supported
00:16:44.847 Multi-Domain Subsystem: Not Supported
00:16:44.847 Fixed Capacity Management: Not Supported
00:16:44.847 Variable Capacity Management: Not Supported
00:16:44.847 Delete Endurance Group: Not Supported
00:16:44.847 Delete NVM Set: Not Supported
00:16:44.847 Extended LBA Formats Supported: Not Supported
00:16:44.847 Flexible Data Placement Supported: Not Supported
00:16:44.847
00:16:44.847 Controller Memory Buffer Support
00:16:44.847 ================================
00:16:44.847 Supported: No
00:16:44.847
00:16:44.847 Persistent Memory Region Support
00:16:44.847 ================================
00:16:44.847 Supported: No
00:16:44.847
00:16:44.847 Admin Command Set Attributes
00:16:44.847 ============================
00:16:44.847 Security Send/Receive: Not Supported
00:16:44.847 Format NVM: Not Supported
00:16:44.847 Firmware Activate/Download: Not Supported
00:16:44.847 Namespace Management: Not Supported
00:16:44.847 Device Self-Test: Not Supported
00:16:44.847 Directives: Not Supported
00:16:44.847 NVMe-MI: Not Supported
00:16:44.847 Virtualization Management: Not Supported
00:16:44.847 Doorbell Buffer Config: Not Supported
00:16:44.847 Get LBA Status Capability: Not Supported
00:16:44.847 Command & Feature Lockdown Capability: Not Supported
00:16:44.847 Abort Command Limit: 4
00:16:44.847 Async Event Request Limit: 4
00:16:44.847 Number of Firmware Slots: N/A
00:16:44.847 Firmware Slot 1 Read-Only: N/A
00:16:44.847 Firmware Activation Without Reset: N/A
00:16:44.847 Multiple Update Detection Support: N/A
00:16:44.847 Firmware Update Granularity: No Information Provided
00:16:44.847 Per-Namespace SMART Log: No
00:16:44.847 Asymmetric Namespace Access Log Page: Not Supported
00:16:44.847 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:16:44.847 Command Effects Log Page: Supported
00:16:44.847 Get Log Page Extended Data: Supported
00:16:44.847 Telemetry Log Pages: Not Supported
00:16:44.847 Persistent Event Log Pages: Not Supported
00:16:44.847 Supported Log Pages Log Page: May Support
00:16:44.847 Commands Supported & Effects Log Page: Not Supported
00:16:44.847 Feature Identifiers & Effects Log Page:May Support
00:16:44.847 NVMe-MI Commands & Effects Log Page: May Support
00:16:44.847 Data Area 4 for Telemetry Log: Not Supported
00:16:44.847 Error Log Page Entries Supported: 128
00:16:44.847 Keep Alive: Supported
00:16:44.847 Keep Alive Granularity: 10000 ms
00:16:44.847
00:16:44.847 NVM Command Set Attributes
00:16:44.847 ==========================
00:16:44.847 Submission Queue Entry Size
00:16:44.847 Max: 64
00:16:44.847 Min: 64
00:16:44.847 Completion Queue Entry Size
00:16:44.847 Max: 16
00:16:44.847 Min: 16
00:16:44.847 Number of Namespaces: 32
00:16:44.847 Compare Command: Supported
00:16:44.847 Write Uncorrectable Command: Not Supported
00:16:44.847 Dataset Management Command: Supported
00:16:44.847 Write Zeroes Command: Supported
00:16:44.847 Set Features Save Field: Not Supported
00:16:44.847 Reservations: Supported
00:16:44.847 Timestamp: Not Supported
00:16:44.847 Copy: Supported
00:16:44.847 Volatile Write Cache: Present
00:16:44.847 Atomic Write Unit (Normal): 1
00:16:44.847 Atomic Write Unit (PFail): 1
00:16:44.847 Atomic Compare & Write Unit: 1
00:16:44.847 Fused Compare & Write: Supported
00:16:44.847 Scatter-Gather List
00:16:44.847 SGL Command Set: Supported
00:16:44.847 SGL Keyed: Supported
00:16:44.847 SGL Bit Bucket Descriptor: Not Supported
00:16:44.847 SGL Metadata Pointer: Not Supported
00:16:44.847 Oversized SGL: Not Supported
00:16:44.847 SGL Metadata Address: Not Supported
00:16:44.847 SGL Offset: Supported
00:16:44.847 Transport SGL Data Block: Not Supported
00:16:44.847 Replay Protected Memory Block: Not Supported
00:16:44.847
00:16:44.847 Firmware Slot Information
00:16:44.847 =========================
00:16:44.847 Active slot: 1
00:16:44.847 Slot 1 Firmware Revision: 25.01
00:16:44.847
00:16:44.847
00:16:44.847 Commands Supported and Effects
00:16:44.847 ==============================
00:16:44.847 Admin Commands
00:16:44.847 --------------
00:16:44.847 Get Log Page (02h): Supported
00:16:44.847 Identify (06h): Supported
00:16:44.847 Abort (08h): Supported
00:16:44.847 Set Features (09h): Supported
00:16:44.847 Get Features (0Ah): Supported
00:16:44.847 Asynchronous Event Request (0Ch): Supported
00:16:44.847 Keep Alive (18h): Supported
00:16:44.847 I/O Commands
00:16:44.847 ------------
00:16:44.847 Flush (00h): Supported LBA-Change
00:16:44.847 Write (01h): Supported LBA-Change
00:16:44.847 Read (02h): Supported
00:16:44.847 Compare (05h): Supported
00:16:44.847 Write Zeroes (08h): Supported LBA-Change
00:16:44.847 Dataset Management (09h): Supported LBA-Change
00:16:44.847 Copy (19h): Supported LBA-Change
00:16:44.847
00:16:44.847 Error Log
00:16:44.847 =========
00:16:44.847
00:16:44.847 Arbitration
00:16:44.847 ===========
00:16:44.847 Arbitration Burst: 1
00:16:44.847
00:16:44.847 Power Management
00:16:44.847 ================
00:16:44.847 Number of Power States: 1
00:16:44.847 Current Power State: Power State #0
00:16:44.847 Power State #0:
00:16:44.847 Max Power: 0.00 W
00:16:44.847 Non-Operational State: Operational
00:16:44.847 Entry Latency: Not Reported
00:16:44.847 Exit Latency: Not Reported
00:16:44.847 Relative Read Throughput: 0
00:16:44.847 Relative Read Latency: 0
00:16:44.847 Relative Write Throughput: 0
00:16:44.847 Relative Write Latency: 0
00:16:44.847 Idle Power: Not Reported
00:16:44.847 Active Power: Not Reported
00:16:44.847 Non-Operational Permissive Mode: Not Supported
00:16:44.848
00:16:44.848 Health Information
00:16:44.848 ==================
00:16:44.848 Critical Warnings:
00:16:44.848 Available Spare Space: OK
00:16:44.848 Temperature: OK
00:16:44.848 Device Reliability: OK
00:16:44.848 Read Only: No
00:16:44.848 Volatile Memory Backup: OK
00:16:44.848 Current Temperature: 0 Kelvin (-273 Celsius)
00:16:44.848 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:44.848 Available Spare: 0%
00:16:44.848 Available Spare Threshold: 0%
00:16:44.848 Life Percentage [2024-12-06 16:29:39.292719] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ceb40 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.292745] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.292749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292753] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292774] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:16:44.848 [2024-12-06 16:29:39.292781] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 669 doesn't match qid
00:16:44.848 [2024-12-06 16:29:39.292792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32657 cdw0:36575e90 sqhd:2740 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292796] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 669 doesn't match qid
00:16:44.848 [2024-12-06 16:29:39.292802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32657 cdw0:36575e90 sqhd:2740 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292806] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 669 doesn't match qid
00:16:44.848 [2024-12-06 16:29:39.292811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32657 cdw0:36575e90 sqhd:2740 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292815] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 669 doesn't match qid
00:16:44.848 [2024-12-06 16:29:39.292820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32657 cdw0:36575e90 sqhd:2740 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292826] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce780 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.292849] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.292853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292858] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.292869] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292889] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.292894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292898] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:16:44.848 [2024-12-06 16:29:39.292902] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:16:44.848 [2024-12-06 16:29:39.292906] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292912] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.292937] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.292941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292945] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292952] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.292973] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.292977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.292982] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292988] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.292994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293017] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293024] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293031] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293054] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293062] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293068] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293094] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293103] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293109] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293132] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293140] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293146] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293175] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293183] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293189] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293211] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293219] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293226] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293248] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.848 [2024-12-06 16:29:39.293252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:16:44.848 [2024-12-06 16:29:39.293256] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293263] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.848 [2024-12-06 16:29:39.293268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.848 [2024-12-06 16:29:39.293292] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06 16:29:39.293300] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293306] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.849 [2024-12-06 16:29:39.293332] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06 16:29:39.293341] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293347] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.849 [2024-12-06 16:29:39.293373] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06 16:29:39.293386] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293393] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.849 [2024-12-06 16:29:39.293415] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06 16:29:39.293424] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293430] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.849 [2024-12-06 16:29:39.293454] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06 16:29:39.293462] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293469] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100
00:16:44.849 [2024-12-06 16:29:39.293474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:16:44.849 [2024-12-06 16:29:39.293492] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:16:44.849 [2024-12-06 16:29:39.293496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:16:44.849 [2024-12-06
16:29:39.293500] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293506] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293529] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293537] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293543] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293568] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293577] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293584] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293606] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293615] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293621] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293644] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293652] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293659] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293683] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293691] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293697] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293720] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293728] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293734] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293755] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293763] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293769] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293791] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293800] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293806] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293830] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293838] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293844] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x182100 00:16:44.849 [2024-12-06 16:29:39.293849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.849 [2024-12-06 16:29:39.293867] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.849 [2024-12-06 16:29:39.293870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:16:44.849 [2024-12-06 16:29:39.293875] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293881] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.293909] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.293913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.293917] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293923] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.293943] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.293947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.293951] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293957] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.293977] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.293981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.293985] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293991] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.293997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294010] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 
16:29:39.294019] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294025] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294052] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294060] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294066] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294093] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294101] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294108] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294127] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294135] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294141] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294167] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294175] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294181] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294204] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294212] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294218] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294240] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294248] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294255] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294279] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294286] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9f0 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294293] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294327] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294335] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd540 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294341] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294367] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294378] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd568 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294385] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294406] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294414] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd590 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294420] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294441] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294449] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5b8 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294455] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294476] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294484] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd5e0 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294491] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294512] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 16:29:39.294520] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd608 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294526] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.850 [2024-12-06 16:29:39.294549] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.850 [2024-12-06 16:29:39.294553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:16:44.850 [2024-12-06 
16:29:39.294557] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd630 length 0x10 lkey 0x182100 00:16:44.850 [2024-12-06 16:29:39.294563] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294586] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294594] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd658 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294600] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294624] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294632] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd680 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294638] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294661] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294669] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6a8 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294678] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294700] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294708] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6d0 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294714] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294741] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294749] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd6f8 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294756] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294781] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294789] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd720 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294795] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294816] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294824] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd748 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294831] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294856] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294864] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd770 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294871] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294896] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294904] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd798 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294912] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 
length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294933] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294941] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7c0 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294947] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.294972] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.294976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.294980] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd7e8 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294987] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.294992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295006] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295014] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd810 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295021] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295042] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295050] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd838 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295056] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295083] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 
16:29:39.295091] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd860 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295097] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295122] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295131] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd888 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295137] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295160] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295168] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8b0 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295174] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295199] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295207] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd8d8 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295214] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.851 [2024-12-06 16:29:39.295235] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.851 [2024-12-06 16:29:39.295239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:16:44.851 [2024-12-06 16:29:39.295243] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd900 length 0x10 lkey 0x182100 00:16:44.851 [2024-12-06 16:29:39.295249] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.852 [2024-12-06 16:29:39.295274] 
nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.852 [2024-12-06 16:29:39.295278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:16:44.852 [2024-12-06 16:29:39.295282] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd928 length 0x10 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295289] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.852 [2024-12-06 16:29:39.295308] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.852 [2024-12-06 16:29:39.295312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:16:44.852 [2024-12-06 16:29:39.295316] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd950 length 0x10 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295323] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.852 [2024-12-06 16:29:39.295345] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.852 [2024-12-06 16:29:39.295349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:16:44.852 [2024-12-06 16:29:39.295354] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd978 length 0x10 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295361] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.295366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.852 [2024-12-06 16:29:39.299378] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.852 [2024-12-06 16:29:39.299384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:16:44.852 [2024-12-06 16:29:39.299388] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9a0 length 0x10 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.299395] nvme_rdma.c:2503:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003ce640 length 0x40 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.299400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:44.852 [2024-12-06 16:29:39.299417] nvme_rdma.c:2781:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:16:44.852 [2024-12-06 16:29:39.299421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001c p:0 m:0 dnr:0 00:16:44.852 [2024-12-06 16:29:39.299425] nvme_rdma.c:2674:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cd9c8 length 0x10 lkey 0x182100 00:16:44.852 [2024-12-06 16:29:39.299430] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
shutdown complete in 6 milliseconds 00:16:44.852 Used: 0% 00:16:44.852 Data Units Read: 0 00:16:44.852 Data Units Written: 0 00:16:44.852 Host Read Commands: 0 00:16:44.852 Host Write Commands: 0 00:16:44.852 Controller Busy Time: 0 minutes 00:16:44.852 Power Cycles: 0 00:16:44.852 Power On Hours: 0 hours 00:16:44.852 Unsafe Shutdowns: 0 00:16:44.852 Unrecoverable Media Errors: 0 00:16:44.852 Lifetime Error Log Entries: 0 00:16:44.852 Warning Temperature Time: 0 minutes 00:16:44.852 Critical Temperature Time: 0 minutes 00:16:44.852 00:16:44.852 Number of Queues 00:16:44.852 ================ 00:16:44.852 Number of I/O Submission Queues: 127 00:16:44.852 Number of I/O Completion Queues: 127 00:16:44.852 00:16:44.852 Active Namespaces 00:16:44.852 ================= 00:16:44.852 Namespace ID:1 00:16:44.852 Error Recovery Timeout: Unlimited 00:16:44.852 Command Set Identifier: NVM (00h) 00:16:44.852 Deallocate: Supported 00:16:44.852 Deallocated/Unwritten Error: Not Supported 00:16:44.852 Deallocated Read Value: Unknown 00:16:44.852 Deallocate in Write Zeroes: Not Supported 00:16:44.852 Deallocated Guard Field: 0xFFFF 00:16:44.852 Flush: Supported 00:16:44.852 Reservation: Supported 00:16:44.852 Namespace Sharing Capabilities: Multiple Controllers 00:16:44.852 Size (in LBAs): 131072 (0GiB) 00:16:44.852 Capacity (in LBAs): 131072 (0GiB) 00:16:44.852 Utilization (in LBAs): 131072 (0GiB) 00:16:44.852 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:44.852 EUI64: ABCDEF0123456789 00:16:44.852 UUID: f204d8bb-d1d5-4fd2-8807-f16814f7202d 00:16:44.852 Thin Provisioning: Not Supported 00:16:44.852 Per-NS Atomic Units: Yes 00:16:44.852 Atomic Boundary Size (Normal): 0 00:16:44.852 Atomic Boundary Size (PFail): 0 00:16:44.852 Atomic Boundary Offset: 0 00:16:44.852 Maximum Single Source Range Length: 65535 00:16:44.852 Maximum Copy Length: 65535 00:16:44.852 Maximum Source Range Count: 1 00:16:44.852 NGUID/EUI64 Never Reused: No 00:16:44.852 Namespace Write Protected: No 00:16:44.852 Number of LBA Formats: 1 00:16:44.852 Current LBA Format: LBA Format #00 00:16:44.852 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:44.852 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:44.852 rmmod nvme_rdma 00:16:44.852 rmmod nvme_fabrics 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3820691 ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3820691 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3820691 ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3820691 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3820691 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3820691' 00:16:44.852 killing process with pid 3820691 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3820691 00:16:44.852 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3820691 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:45.112 00:16:45.112 real 0m6.546s 00:16:45.112 user 0m5.388s 00:16:45.112 sys 0m4.326s 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.112 ************************************ 00:16:45.112 END TEST nvmf_identify 00:16:45.112 ************************************ 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.112 ************************************ 00:16:45.112 START TEST nvmf_perf 00:16:45.112 ************************************ 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:16:45.112 * Looking for test storage... 
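The nvmf_identify teardown above stops the target through the killprocess helper: probe the pid with kill -0, look up the command name with ps, then kill and reap it. A minimal bash sketch of that pattern (illustrative only, not the exact autotest_common.sh source; the real helper also special-cases processes started through sudo):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                 # no pid recorded, nothing to kill
        kill -0 "$pid" 2>/dev/null || return 1    # probe: is the process still alive?
        if [[ $(uname) == Linux ]]; then
            # look up the command name; do not target a bare sudo wrapper
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap it when it is our child
    }

Called as killprocess 3820691, this reproduces the kill-0/ps/kill/wait sequence visible in the trace.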
00:16:45.112 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:45.112 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:45.372 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:45.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.373 --rc genhtml_branch_coverage=1 00:16:45.373 --rc genhtml_function_coverage=1 00:16:45.373 --rc genhtml_legend=1 00:16:45.373 --rc geninfo_all_blocks=1 00:16:45.373 --rc geninfo_unexecuted_blocks=1 00:16:45.373 00:16:45.373 ' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:45.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.373 --rc genhtml_branch_coverage=1 00:16:45.373 --rc genhtml_function_coverage=1 00:16:45.373 --rc genhtml_legend=1 00:16:45.373 --rc geninfo_all_blocks=1 00:16:45.373 --rc geninfo_unexecuted_blocks=1 00:16:45.373 00:16:45.373 ' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:45.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.373 --rc genhtml_branch_coverage=1 00:16:45.373 --rc genhtml_function_coverage=1 00:16:45.373 --rc genhtml_legend=1 00:16:45.373 --rc geninfo_all_blocks=1 00:16:45.373 --rc geninfo_unexecuted_blocks=1 00:16:45.373 00:16:45.373 ' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:45.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.373 --rc genhtml_branch_coverage=1 00:16:45.373 --rc genhtml_function_coverage=1 00:16:45.373 --rc genhtml_legend=1 00:16:45.373 --rc geninfo_all_blocks=1 00:16:45.373 --rc geninfo_unexecuted_blocks=1 00:16:45.373 00:16:45.373 ' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.373 16:29:39 
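The lcov probe traced above walks the lt/cmp_versions pair from scripts/common.sh: both version strings are split on '.', '-' and ':' via IFS, numeric fields are validated with a ^[0-9]+$ test, and the arrays are compared element-wise. A condensed stand-alone sketch of the same technique (simplified; the upstream comparator tracks lt/gt/eq counters and supports several operators, which is omitted here):

    # version_lt A B: succeed when version A sorts strictly before version B.
    version_lt() {
        local -a v1 v2
        local i len a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}        # missing fields compare as 0
            [[ $a =~ ^[0-9]+$ ]] || a=0        # and so do non-numeric fields
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                               # equal versions: not strictly less
    }

version_lt 1.15 2 succeeds, matching the comparison traced above: lcov 1.15 predates 2.x, so the harness keeps the older --rc lcov_branch_coverage/--rc lcov_function_coverage option spelling it assigns to LCOV_OPTS.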
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.373 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.373 16:29:39 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:16:45.373 16:29:39 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.957 16:29:45 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:51.957 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:51.957 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:51.957 Found net devices under 0000:18:00.0: mlx_0_0 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
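The loop traced above (nvmf/common.sh@410-429) resolves each matched PCI function to its backing netdev by globbing sysfs. A minimal standalone sketch of that lookup, assuming only the standard Linux sysfs layout; the 0x15b3/0x1015 IDs and the "Found ..." wording mirror what this run printed, everything else is illustrative:

  #!/usr/bin/env bash
  # Report the netdevs backing each Mellanox PCI function, mirroring the
  # "Found net devices under ..." loop in nvmf/common.sh.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")              # e.g. 0x15b3 (Mellanox)
      device=$(<"$pci/device")              # e.g. 0x1015 (ConnectX-4 Lx)
      [[ $vendor == 0x15b3 ]] || continue
      echo "Found ${pci##*/} ($vendor - $device)"
      pci_net_devs=("$pci"/net/*)           # one entry per bound netdev
      [[ -e ${pci_net_devs[0]} ]] || continue
      pci_net_devs=("${pci_net_devs[@]##*/}")
      echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
  done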
00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:51.957 Found net devices under 0000:18:00.1: mlx_0_1 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:51.957 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:51.958 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.958 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:51.958 altname enp24s0f0np0 00:16:51.958 altname ens785f0np0 00:16:51.958 inet 192.168.100.8/24 scope global mlx_0_0 00:16:51.958 valid_lft forever preferred_lft forever 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:51.958 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.958 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:51.958 altname enp24s0f1np1 00:16:51.958 altname ens785f1np1 00:16:51.958 inet 192.168.100.9/24 scope global mlx_0_1 00:16:51.958 valid_lft forever preferred_lft forever 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- 
# '[' '' == iso ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # 
RDMA_IP_LIST='192.168.100.8 00:16:51.958 192.168.100.9' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:51.958 192.168.100.9' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:51.958 192.168.100.9' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3824127 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3824127 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3824127 ']' 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.958 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:51.958 [2024-12-06 16:29:45.677295] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
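The 192.168.100.8/192.168.100.9 pair assigned just above comes from plain iproute2 parsing; get_ip_address() as traced at nvmf/common.sh@116-117 is just ip/awk/cut. A standalone equivalent (the mlx_0_* names are the interfaces this run discovered; the script itself collects the list into RDMA_IP_LIST and slices it with head/tail, the direct assignments below are an equivalent shortcut for two interfaces):

  # Each IPv4 address on the interface, one per line, /prefix stripped,
  # exactly as nvmf/common.sh derives it.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run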
00:16:51.959 [2024-12-06 16:29:45.677341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:51.959 [2024-12-06 16:29:45.732992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:51.959 [2024-12-06 16:29:45.771157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:51.959 [2024-12-06 16:29:45.771191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:51.959 [2024-12-06 16:29:45.771198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:51.959 [2024-12-06 16:29:45.771203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:51.959 [2024-12-06 16:29:45.771207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:51.959 [2024-12-06 16:29:45.772430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:51.959 [2024-12-06 16:29:45.772448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:51.959 [2024-12-06 16:29:45.772533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:51.959 [2024-12-06 16:29:45.772535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:16:51.959 16:29:45 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:16:54.482 16:29:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:16:54.482 16:29:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:16:54.482 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0
00:16:54.482 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:16:54.740 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:16:54.740 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']'
00:16:54.740 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:16:54.740 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']'
00:16:54.740 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
00:16:54.998 [2024-12-06 16:29:49.484286] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16
00:16:54.998 [2024-12-06 16:29:49.503854] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x79a800/0x670580) succeed.
00:16:54.998 [2024-12-06 16:29:49.513299] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x79bd00/0x6f0240) succeed.
00:16:54.998 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:55.255 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:16:55.255 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:55.255 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:16:55.255 16:29:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:16:55.512 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:16:55.770 [2024-12-06 16:29:50.332214] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:16:55.770 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:16:56.027 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:16:56.027 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:16:56.027 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:16:56.027 16:29:50 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:16:57.402 Initializing NVMe Controllers
00:16:57.402 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:16:57.402 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:16:57.402 Initialization complete. Launching workers.
00:16:57.402 ========================================================
00:16:57.402 Latency(us)
00:16:57.402 Device Information : IOPS MiB/s Average min max
00:16:57.402 PCIE (0000:d8:00.0) NSID 1 from core 0: 107170.64 418.64 298.17 31.87 4542.56
00:16:57.402 ========================================================
00:16:57.402 Total : 107170.64 418.64 298.17 31.87 4542.56
00:16:57.402
00:16:57.402 16:29:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:00.678 Initializing NVMe Controllers
00:17:00.678 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:00.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:00.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:00.678 Initialization complete. Launching workers.
00:17:00.678 ========================================================
00:17:00.678 Latency(us)
00:17:00.678 Device Information : IOPS MiB/s Average min max
00:17:00.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6965.94 27.21 143.35 47.57 5055.80
00:17:00.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5390.61 21.06 185.30 64.33 5074.67
00:17:00.678 ========================================================
00:17:00.678 Total : 12356.55 48.27 161.65 47.57 5074.67
00:17:00.678
00:17:00.678 16:29:55 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:03.950 Initializing NVMe Controllers
00:17:03.950 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:03.950 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:03.950 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:03.950 Initialization complete. Launching workers.
00:17:03.950 ========================================================
00:17:03.950 Latency(us)
00:17:03.950 Device Information : IOPS MiB/s Average min max
00:17:03.950 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19273.98 75.29 1660.36 470.18 5439.56
00:17:03.950 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7979.54 7749.54 8192.89
00:17:03.950 ========================================================
00:17:03.950 Total : 23305.98 91.04 2753.59 470.18 8192.89
00:17:03.950
00:17:03.950 16:29:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:17:03.950 16:29:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:09.211 Initializing NVMe Controllers
00:17:09.211 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:09.211 Controller IO queue size 128, less than required.
00:17:09.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:09.211 Controller IO queue size 128, less than required.
00:17:09.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:09.211 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:09.211 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:09.211 Initialization complete. Launching workers.
00:17:09.211 ========================================================
00:17:09.211 Latency(us)
00:17:09.211 Device Information : IOPS MiB/s Average min max
00:17:09.211 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4176.10 1044.02 30807.07 14115.90 70373.44
00:17:09.211 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4202.04 1050.51 30196.70 10552.44 48606.31
00:17:09.211 ========================================================
00:17:09.211 Total : 8378.14 2094.53 30500.94 10552.44 70373.44
00:17:09.211
00:17:09.211 16:30:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:17:09.211 No valid NVMe controllers or AIO or URING devices found
00:17:09.211 Initializing NVMe Controllers
00:17:09.211 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:09.211 Controller IO queue size 128, less than required.
00:17:09.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:09.211 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:17:09.211 Controller IO queue size 128, less than required.
00:17:09.211 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:09.211 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:17:09.211 WARNING: Some requested NVMe devices were skipped
00:17:09.211 16:30:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:17:13.391 Initializing NVMe Controllers
00:17:13.391 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:13.391 Controller IO queue size 128, less than required.
00:17:13.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:13.391 Controller IO queue size 128, less than required.
00:17:13.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:13.391 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:13.391 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:13.391 Initialization complete. Launching workers.
00:17:13.391
00:17:13.391 ====================
00:17:13.391 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:13.392 RDMA transport:
00:17:13.392 dev name: mlx5_0
00:17:13.392 polls: 428493
00:17:13.392 idle_polls: 424245
00:17:13.392 completions: 47462
00:17:13.392 queued_requests: 1
00:17:13.392 total_send_wrs: 23731
00:17:13.392 send_doorbell_updates: 4011
00:17:13.392 total_recv_wrs: 23858
00:17:13.392 recv_doorbell_updates: 4012
00:17:13.392 ---------------------------------
00:17:13.392
00:17:13.392 ====================
00:17:13.392 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:13.392 RDMA transport:
00:17:13.392 dev name: mlx5_0
00:17:13.392 polls: 432113
00:17:13.392 idle_polls: 431843
00:17:13.392 completions: 21038
00:17:13.392 queued_requests: 1
00:17:13.392 total_send_wrs: 10519
00:17:13.392 send_doorbell_updates: 253
00:17:13.392 total_recv_wrs: 10646
00:17:13.392 recv_doorbell_updates: 254
00:17:13.392 ---------------------------------
00:17:13.392 ========================================================
00:17:13.392 Latency(us)
00:17:13.392 Device Information : IOPS MiB/s Average min max
00:17:13.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5931.43 1482.86 21583.21 8305.66 52599.51
00:17:13.392 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2629.03 657.26 48746.27 27659.50 71945.24
00:17:13.392 ========================================================
00:17:13.392 Total : 8560.46 2140.11 29925.34 8305.66 71945.24
00:17:13.392
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:13.392 rmmod nvme_rdma
00:17:13.392 rmmod nvme_fabrics
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3824127 ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3824127
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3824127 ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3824127
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824127
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824127'
00:17:13.392 killing process with pid 3824127
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3824127
00:17:13.392 16:30:07 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3824127
00:17:17.569 16:30:11 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:17.569 16:30:11 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:17:17.569
00:17:17.569 real 0m32.044s
00:17:17.569 user 1m45.844s
00:17:17.569 sys 0m5.636s
00:17:17.569 16:30:11 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.569 16:30:11 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:17:17.569 ************************************
00:17:17.569 END TEST nvmf_perf
00:17:17.569 ************************************
00:17:17.570 16:30:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:17:17.570 16:30:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:17.570 16:30:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:17.570 16:30:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:17:17.570 ************************************
00:17:17.570 START TEST nvmf_fio_host
00:17:17.570 ************************************
00:17:17.570 16:30:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:17:17.570 * Looking for test storage...
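The nvmf_fio_host test starting here goes through the same nvmftestinit/nvmfappstart path traced earlier. For reference, the complete target-side bring-up that produced the perf numbers above, condensed verbatim from the host/perf.sh trace (rpc.py stands for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path; the NQN, serial, bdev names and address are the ones from this run):

  # Create the RDMA transport, one subsystem with two namespaces, and listeners.
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420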
00:17:17.570 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:17.570 16:30:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:17.570 16:30:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:17.570 16:30:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.570 --rc genhtml_branch_coverage=1 00:17:17.570 --rc genhtml_function_coverage=1 00:17:17.570 --rc genhtml_legend=1 00:17:17.570 --rc geninfo_all_blocks=1 00:17:17.570 --rc geninfo_unexecuted_blocks=1 00:17:17.570 00:17:17.570 ' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.570 --rc genhtml_branch_coverage=1 00:17:17.570 --rc genhtml_function_coverage=1 00:17:17.570 --rc genhtml_legend=1 00:17:17.570 --rc geninfo_all_blocks=1 00:17:17.570 --rc geninfo_unexecuted_blocks=1 00:17:17.570 00:17:17.570 ' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.570 --rc genhtml_branch_coverage=1 00:17:17.570 --rc genhtml_function_coverage=1 00:17:17.570 --rc genhtml_legend=1 00:17:17.570 --rc geninfo_all_blocks=1 00:17:17.570 --rc geninfo_unexecuted_blocks=1 00:17:17.570 00:17:17.570 ' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:17.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.570 --rc genhtml_branch_coverage=1 00:17:17.570 --rc genhtml_function_coverage=1 00:17:17.570 --rc genhtml_legend=1 00:17:17.570 --rc geninfo_all_blocks=1 00:17:17.570 --rc geninfo_unexecuted_blocks=1 00:17:17.570 00:17:17.570 ' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.570 16:30:12 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.570 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.571 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:17.571 
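The "[: : integer expression expected" error above is benign but recurs every time test/nvmf/common.sh is sourced: line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this environment. A minimal sketch of the failing pattern and a tolerant guard — FLAG here is a hypothetical stand-in for whatever variable the script actually reads:

  FLAG=""                           # empty in this CI environment
  # [ "$FLAG" -eq 1 ]               # reproduces: [: : integer expression expected
  if [ "${FLAG:-0}" -eq 1 ]; then   # :- substitutes 0 when the value is empty or unset
    echo "flag-gated nvmf_tgt args would be appended here"
  fi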
16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:17:17.571 16:30:12 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.834 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:22.835 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:22.835 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:22.835 Found net devices under 0000:18:00.0: mlx_0_0 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:22.835 Found net devices under 0000:18:00.1: mlx_0_1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:22.835 
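With the IB/RDMA kernel modules loaded (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm), allocate_nic_ips walks the mlx netdevs and reads each one's IPv4 address with the ip/awk/cut pipeline traced below. A standalone sketch of that lookup, using the interface names from this run:

  for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9 on this testbed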
16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:22.835 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.835 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:22.835 altname enp24s0f0np0 00:17:22.835 altname ens785f0np0 00:17:22.835 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.835 valid_lft forever preferred_lft forever 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:22.835 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.835 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:22.835 altname enp24s0f1np1 00:17:22.835 altname ens785f1np1 00:17:22.835 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.835 valid_lft forever preferred_lft forever 00:17:22.835 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:23.093 16:30:17 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:23.093 192.168.100.9' 00:17:23.093 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:23.094 192.168.100.9' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:23.094 192.168.100.9' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3832770 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3832770 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3832770 ']' 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.094 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.094 [2024-12-06 16:30:17.696393] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:17:23.094 [2024-12-06 16:30:17.696437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.094 [2024-12-06 16:30:17.753742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:23.094 [2024-12-06 16:30:17.793296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.094 [2024-12-06 16:30:17.793333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.094 [2024-12-06 16:30:17.793339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.094 [2024-12-06 16:30:17.793344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.094 [2024-12-06 16:30:17.793349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.094 [2024-12-06 16:30:17.794719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.094 [2024-12-06 16:30:17.794823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.094 [2024-12-06 16:30:17.794910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.094 [2024-12-06 16:30:17.794912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.351 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.351 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:23.351 16:30:17 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:23.351 [2024-12-06 16:30:18.071351] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd470c0/0xd4b5b0) succeed. 00:17:23.608 [2024-12-06 16:30:18.079584] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd48750/0xd8cc50) succeed. 
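At this point the target is up: nvmf_tgt was started with four cores (-m 0xF) and the full tracepoint mask (-e 0xFFFF), and the first RPC created the RDMA transport, which in turn registered both mlx5 IB devices. Condensed, the bring-up traced above amounts to roughly:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192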
00:17:23.608 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:23.608 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.608 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.608 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:23.866 Malloc1 00:17:23.866 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.124 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:24.124 16:30:18 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.381 [2024-12-06 16:30:18.979795] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:24.381 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:24.639 16:30:19 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:24.639 16:30:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:24.896 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:24.896 fio-3.35 00:17:24.896 Starting 1 thread 00:17:27.433 00:17:27.433 test: (groupid=0, jobs=1): err= 0: pid=3833352: Fri Dec 6 16:30:21 2024 00:17:27.433 read: IOPS=18.7k, BW=72.9MiB/s (76.4MB/s)(146MiB/2003msec) 00:17:27.433 slat (nsec): min=1273, max=32047, avg=1422.18, stdev=442.04 00:17:27.433 clat (usec): min=2051, max=6168, avg=3405.63, stdev=119.61 00:17:27.433 lat (usec): min=2075, max=6169, avg=3407.06, stdev=119.56 00:17:27.433 clat percentiles (usec): 00:17:27.433 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3392], 00:17:27.433 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:17:27.433 | 70.00th=[ 3392], 80.00th=[ 3425], 90.00th=[ 3425], 95.00th=[ 3425], 00:17:27.433 | 99.00th=[ 3621], 99.50th=[ 4359], 99.90th=[ 5211], 99.95th=[ 5669], 00:17:27.433 | 99.99th=[ 6128] 00:17:27.433 bw ( KiB/s): min=72504, max=75424, per=99.98%, avg=74632.00, stdev=1423.03, samples=4 00:17:27.433 iops : min=18126, max=18856, avg=18658.00, stdev=355.76, samples=4 00:17:27.433 write: IOPS=18.7k, BW=72.9MiB/s (76.4MB/s)(146MiB/2003msec); 0 zone resets 00:17:27.433 slat (nsec): min=1314, max=23317, avg=1774.79, stdev=510.42 00:17:27.433 clat (usec): min=2077, max=6151, avg=3403.35, stdev=111.76 00:17:27.433 lat (usec): min=2100, max=6152, avg=3405.12, stdev=111.72 00:17:27.433 clat percentiles (usec): 00:17:27.433 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3392], 20.00th=[ 3392], 00:17:27.433 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:17:27.433 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3425], 00:17:27.433 | 99.00th=[ 3621], 99.50th=[ 4228], 99.90th=[ 5211], 99.95th=[ 5211], 00:17:27.433 | 99.99th=[ 6128] 00:17:27.433 bw ( KiB/s): min=72544, max=75456, per=99.98%, avg=74634.00, stdev=1401.95, samples=4 00:17:27.433 iops : min=18136, max=18864, avg=18658.50, stdev=350.49, samples=4 00:17:27.433 lat (msec) : 4=99.37%, 10=0.63% 00:17:27.433 cpu : usr=99.55%, sys=0.10%, ctx=15, majf=0, minf=4 00:17:27.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:27.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:27.433 issued rwts: total=37381,37379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:27.433 00:17:27.433 Run status group 0 (all jobs): 00:17:27.433 READ: bw=72.9MiB/s (76.4MB/s), 72.9MiB/s-72.9MiB/s (76.4MB/s-76.4MB/s), io=146MiB (153MB), run=2003-2003msec 00:17:27.433 WRITE: bw=72.9MiB/s (76.4MB/s), 72.9MiB/s-72.9MiB/s (76.4MB/s-76.4MB/s), io=146MiB (153MB), run=2003-2003msec 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:27.433 16:30:21 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:27.691 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:27.691 fio-3.35 00:17:27.691 Starting 1 thread 00:17:30.210 00:17:30.210 test: (groupid=0, jobs=1): err= 0: pid=3833858: Fri Dec 6 16:30:24 2024 00:17:30.210 read: IOPS=15.0k, BW=235MiB/s (246MB/s)(462MiB/1965msec) 00:17:30.210 slat (nsec): min=2118, max=38149, avg=2405.95, stdev=846.17 00:17:30.210 clat (usec): min=475, max=8063, avg=1627.84, stdev=1360.04 00:17:30.210 lat (usec): min=477, max=8078, avg=1630.24, stdev=1360.30 00:17:30.210 clat percentiles (usec): 00:17:30.210 | 1.00th=[ 644], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 857], 00:17:30.210 | 30.00th=[ 922], 40.00th=[ 996], 50.00th=[ 1090], 60.00th=[ 1221], 00:17:30.210 | 70.00th=[ 1352], 80.00th=[ 1565], 90.00th=[ 4686], 95.00th=[ 4752], 00:17:30.210 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7308], 00:17:30.210 | 99.99th=[ 8029] 00:17:30.210 bw ( KiB/s): min=112832, max=124128, per=49.29%, avg=118616.00, stdev=4698.98, samples=4 00:17:30.210 iops : min= 7052, max= 7758, avg=7413.50, stdev=293.69, samples=4 00:17:30.210 write: IOPS=8828, BW=138MiB/s (145MB/s)(241MiB/1749msec); 0 zone resets 00:17:30.210 slat (usec): min=24, max=113, avg=27.94, stdev= 4.79 00:17:30.210 clat (usec): min=3776, max=18927, avg=11815.34, stdev=1647.44 00:17:30.210 lat (usec): min=3802, max=18956, avg=11843.29, stdev=1647.42 00:17:30.210 clat percentiles (usec): 00:17:30.210 | 1.00th=[ 7439], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:17:30.210 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:17:30.210 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13698], 95.00th=[14353], 00:17:30.210 | 99.00th=[16057], 99.50th=[16712], 99.90th=[18482], 99.95th=[18744], 00:17:30.210 | 99.99th=[18744] 00:17:30.210 bw ( KiB/s): min=120832, max=128640, per=87.45%, avg=123528.00, stdev=3524.49, samples=4 00:17:30.210 iops : min= 7552, max= 8040, avg=7720.50, stdev=220.28, samples=4 00:17:30.210 lat (usec) : 500=0.01%, 750=4.32%, 1000=22.61% 00:17:30.210 lat (msec) : 2=28.27%, 4=2.20%, 10=12.15%, 20=30.44% 00:17:30.210 cpu : usr=96.66%, sys=1.50%, ctx=204, majf=0, minf=4 00:17:30.210 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:30.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.210 issued rwts: total=29553,15441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.210 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.210 00:17:30.210 Run status group 0 (all jobs): 00:17:30.210 READ: bw=235MiB/s (246MB/s), 235MiB/s-235MiB/s (246MB/s-246MB/s), io=462MiB (484MB), run=1965-1965msec 00:17:30.210 WRITE: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=241MiB (253MB), run=1749-1749msec 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
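Both fio jobs above are driven through the SPDK fio plugin rather than the kernel initiator: the spdk_nvme ioengine is LD_PRELOADed into fio, and the --filename string encodes the transport, so each job connects directly to the RDMA listener at 192.168.100.8:4420. The first invocation, reassembled from the trace:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The second run swaps in mock_sgl_config.fio at a 16 KiB block size against the same subsystem.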
00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:30.210 rmmod nvme_rdma 00:17:30.210 rmmod nvme_fabrics 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3832770 ']' 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3832770 00:17:30.210 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3832770 ']' 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3832770 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3832770 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3832770' 00:17:30.211 killing process with pid 3832770 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3832770 00:17:30.211 16:30:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3832770 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:30.468 00:17:30.468 real 0m13.234s 00:17:30.468 user 0m52.011s 00:17:30.468 sys 0m5.099s 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.468 ************************************ 00:17:30.468 END TEST nvmf_fio_host 00:17:30.468 ************************************ 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.468 ************************************ 00:17:30.468 START TEST nvmf_failover 00:17:30.468 ************************************ 00:17:30.468 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:30.725 * Looking for test storage... 00:17:30.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.726 --rc genhtml_branch_coverage=1 00:17:30.726 --rc genhtml_function_coverage=1 00:17:30.726 --rc genhtml_legend=1 00:17:30.726 --rc geninfo_all_blocks=1 00:17:30.726 --rc geninfo_unexecuted_blocks=1 00:17:30.726 00:17:30.726 ' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.726 --rc genhtml_branch_coverage=1 00:17:30.726 --rc genhtml_function_coverage=1 00:17:30.726 --rc genhtml_legend=1 00:17:30.726 --rc geninfo_all_blocks=1 00:17:30.726 --rc geninfo_unexecuted_blocks=1 00:17:30.726 00:17:30.726 ' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.726 --rc genhtml_branch_coverage=1 00:17:30.726 --rc genhtml_function_coverage=1 00:17:30.726 --rc genhtml_legend=1 00:17:30.726 --rc geninfo_all_blocks=1 00:17:30.726 --rc geninfo_unexecuted_blocks=1 00:17:30.726 00:17:30.726 ' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.726 --rc genhtml_branch_coverage=1 00:17:30.726 --rc genhtml_function_coverage=1 00:17:30.726 --rc genhtml_legend=1 00:17:30.726 --rc geninfo_all_blocks=1 00:17:30.726 --rc geninfo_unexecuted_blocks=1 00:17:30.726 00:17:30.726 ' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.726 16:30:25 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.726 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.726 16:30:25 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:35.985 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:35.985 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:35.985 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:35.986 Found net devices under 0000:18:00.0: mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:35.986 Found net devices under 0000:18:00.1: mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:35.986 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:35.986 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:35.986 altname enp24s0f0np0 00:17:35.986 altname ens785f0np0 00:17:35.986 inet 192.168.100.8/24 scope global mlx_0_0 00:17:35.986 
valid_lft forever preferred_lft forever 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:35.986 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:35.986 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:35.986 altname enp24s0f1np1 00:17:35.986 altname ens785f1np1 00:17:35.986 inet 192.168.100.9/24 scope global mlx_0_1 00:17:35.986 valid_lft forever preferred_lft forever 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:35.986 16:30:30 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:35.986 192.168.100.9' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:35.986 192.168.100.9' 00:17:35.986 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:35.987 192.168.100.9' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3837609 00:17:35.987 
16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3837609 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3837609 ']' 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:35.987 [2024-12-06 16:30:30.350876] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:17:35.987 [2024-12-06 16:30:30.350921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.987 [2024-12-06 16:30:30.407682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:35.987 [2024-12-06 16:30:30.446402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.987 [2024-12-06 16:30:30.446434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.987 [2024-12-06 16:30:30.446440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.987 [2024-12-06 16:30:30.446446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.987 [2024-12-06 16:30:30.446451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.987 [2024-12-06 16:30:30.447677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.987 [2024-12-06 16:30:30.447758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.987 [2024-12-06 16:30:30.447760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.987 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:36.245 [2024-12-06 16:30:30.749457] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa22800/0xa26cf0) succeed. 00:17:36.245 [2024-12-06 16:30:30.757536] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa23df0/0xa68390) succeed. 00:17:36.245 16:30:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:36.504 Malloc0 00:17:36.504 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.761 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.761 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:37.019 [2024-12-06 16:30:31.596963] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:37.019 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:37.276 [2024-12-06 16:30:31.781304] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:17:37.276 [2024-12-06 16:30:31.965924] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # 
bdevperf_pid=3837902 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3837902 /var/tmp/bdevperf.sock 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3837902 ']' 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.276 16:30:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:37.533 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.533 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:37.533 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:37.790 NVMe0n1 00:17:37.790 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:38.047 00:17:38.047 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3837930 00:17:38.047 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.047 16:30:32 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:39.061 16:30:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:39.383 16:30:33 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:17:42.656 16:30:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:42.656 00:17:42.656 16:30:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:42.656 16:30:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:17:45.934 16:30:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:45.934 [2024-12-06 16:30:40.512405] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:45.934 16:30:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:17:46.867 16:30:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:17:47.124 16:30:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3837930 00:17:53.677 { 00:17:53.677 "results": [ 00:17:53.677 { 00:17:53.677 "job": "NVMe0n1", 00:17:53.677 "core_mask": "0x1", 00:17:53.677 "workload": "verify", 00:17:53.677 "status": "finished", 00:17:53.677 "verify_range": { 00:17:53.677 "start": 0, 00:17:53.677 "length": 16384 00:17:53.677 }, 00:17:53.677 "queue_depth": 128, 00:17:53.677 "io_size": 4096, 00:17:53.677 "runtime": 15.006075, 00:17:53.677 "iops": 15139.201956540934, 00:17:53.677 "mibps": 59.13750764273802, 00:17:53.677 "io_failed": 5149, 00:17:53.677 "io_timeout": 0, 00:17:53.677 "avg_latency_us": 8247.929660667989, 00:17:53.677 "min_latency_us": 332.2311111111111, 00:17:53.677 "max_latency_us": 1012846.7437037037 00:17:53.677 } 00:17:53.677 ], 00:17:53.677 "core_count": 1 00:17:53.677 } 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3837902 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3837902 ']' 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3837902 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837902 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.677 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837902' 00:17:53.677 killing process with pid 3837902 00:17:53.678 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3837902 00:17:53.678 16:30:47 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3837902 00:17:53.678 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:53.678 [2024-12-06 16:30:32.024592] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
00:17:53.678 [2024-12-06 16:30:32.024639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3837902 ] 00:17:53.678 [2024-12-06 16:30:32.082917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.678 [2024-12-06 16:30:32.121330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.678 Running I/O for 15 seconds... 00:17:53.678 18944.00 IOPS, 74.00 MiB/s [2024-12-06T15:30:48.406Z] 10146.00 IOPS, 39.63 MiB/s [2024-12-06T15:30:48.406Z] [2024-12-06 16:30:34.875264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 
len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181200 00:17:53.678 [2024-12-06 16:30:34.875745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.678 [2024-12-06 16:30:34.875753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 
00:17:53.679 [2024-12-06 16:30:34.875795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x181200 00:17:53.679 [2024-12-06 16:30:34.875913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.679 [2024-12-06 16:30:34.875920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x181200
[... repeated command/completion dump elided: READ sqid:1 lba:31616-31736 len:8 (SGL KEYED DATA BLOCK, key:0x181200) and WRITE sqid:1 lba:31744-32272 len:8 (SGL DATA BLOCK OFFSET 0x0), each reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 ...]
00:17:53.681 [2024-12-06 16:30:34.878833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:53.681 [2024-12-06 16:30:34.878844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.681 [2024-12-06 16:30:34.878850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32280 len:8 PRP1 0x0 PRP2 0x0
00:17:53.681 [2024-12-06 16:30:34.878856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.681 [2024-12-06 16:30:34.878899] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:17:53.681 [2024-12-06 16:30:34.878908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:17:53.681 [2024-12-06 16:30:34.881503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:17:53.681 [2024-12-06 16:30:34.895275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:17:53.681 [2024-12-06 16:30:34.935987] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:17:53.681 12274.67 IOPS, 47.95 MiB/s [2024-12-06T15:30:48.409Z] 14006.50 IOPS, 54.71 MiB/s [2024-12-06T15:30:48.409Z] 13143.20 IOPS, 51.34 MiB/s [2024-12-06T15:30:48.409Z]
[... second command/completion dump elided (qpair abort beginning 2024-12-06 16:30:38.325882): interleaved READ sqid:1 lba:15576-16088 len:8 (SGL KEYED DATA BLOCK, key:0x180f00) and WRITE sqid:1 lba:16216-16592 len:8 (SGL DATA BLOCK OFFSET 0x0), each reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 ...]
[2024-12-06 16:30:38.327459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1
lba:16096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.684 [2024-12-06 16:30:38.327571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x180f00 00:17:53.684 [2024-12-06 16:30:38.327577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.327584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 
len:0x1000 key:0x180f00 00:17:53.685 [2024-12-06 16:30:38.327590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.327598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x180f00 00:17:53.685 [2024-12-06 16:30:38.327604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.327611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x180f00 00:17:53.685 [2024-12-06 16:30:38.327617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.327625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x180f00 00:17:53.685 [2024-12-06 16:30:38.327631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.327638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x180f00 00:17:53.685 [2024-12-06 16:30:38.327645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.329522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.685 [2024-12-06 16:30:38.329533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.685 [2024-12-06 16:30:38.329540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16208 len:8 PRP1 0x0 PRP2 0x0 00:17:53.685 [2024-12-06 16:30:38.329546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:38.329585] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:17:53.685 [2024-12-06 16:30:38.329593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:17:53.685 [2024-12-06 16:30:38.332184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:17:53.685 [2024-12-06 16:30:38.345492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:17:53.685 [2024-12-06 16:30:38.381911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
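The burst above is one complete failover cycle: when the host tears down the old qpair, bdev_nvme aborts every command still queued on it (each gets the same ABORTED - SQ DELETION status), marks the controller failed, disconnects, and reconnects to the next registered trid. A quick, hypothetical way to tally such a cycle from a captured copy of this output -- bdevperf.log is an assumed capture path, not a file this harness writes:

# count queued commands that were aborted by SQ deletion in the captured run
grep -c 'ABORTED - SQ DELETION' bdevperf.log
# count failover cycles that actually finished with a successful reconnect
grep -c 'Resetting controller successful' bdevperf.log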
00:17:53.685 12319.33 IOPS, 48.12 MiB/s [2024-12-06T15:30:48.413Z] 13318.00 IOPS, 52.02 MiB/s [2024-12-06T15:30:48.413Z] 14062.88 IOPS, 54.93 MiB/s [2024-12-06T15:30:48.413Z] 14412.00 IOPS, 56.30 MiB/s [2024-12-06T15:30:48.413Z] [2024-12-06 16:30:42.712285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 
00:17:53.685 [2024-12-06 16:30:42.712454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181200 00:17:53.685 [2024-12-06 16:30:42.712500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 
16:30:42.712579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.685 [2024-12-06 16:30:42.712697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.685 [2024-12-06 16:30:42.712703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:53.686 [2024-12-06 16:30:42.712715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.686 [2024-12-06 16:30:42.712728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 
m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.712988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.712995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.686 [2024-12-06 16:30:42.713069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.686 [2024-12-06 16:30:42.713082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.686 [2024-12-06 16:30:42.713095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181200 00:17:53.686 [2024-12-06 16:30:42.713135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.686 [2024-12-06 16:30:42.713143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 
16:30:42.713214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:53.687 [2024-12-06 16:30:42.713348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:53.687 [2024-12-06 16:30:42.713593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181200 00:17:53.687 [2024-12-06 16:30:42.713650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.687 [2024-12-06 16:30:42.713658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.688 [2024-12-06 16:30:42.713671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.688 [2024-12-06 16:30:42.713684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.688 [2024-12-06 16:30:42.713698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.688 [2024-12-06 16:30:42.713713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0 00:17:53.688 [2024-12-06 16:30:42.713726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x181200 00:17:53.688 [2024-12-06 16:30:42.713732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.713908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.713993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.713999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.714005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.688 [2024-12-06 16:30:42.714011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.714020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.714026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.714033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x181200
00:17:53.688 [2024-12-06 16:30:42.714039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:976de000 sqhd:7210 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.715895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:53.688 [2024-12-06 16:30:42.715906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.688 [2024-12-06 16:30:42.715912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15896 len:8 PRP1 0x0 PRP2 0x0
00:17:53.688 [2024-12-06 16:30:42.715918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.688 [2024-12-06 16:30:42.715957] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:17:53.688 [2024-12-06 16:30:42.715966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:17:53.688 [2024-12-06 16:30:42.718550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:17:53.688 [2024-12-06 16:30:42.735482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0
00:17:53.688 [2024-12-06 16:30:42.774179] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
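The dump above is the expected shape of a planned path drop under load: every READ/WRITE still queued on the deleted submission queue is completed with ABORTED - SQ DELETION before bdev_nvme fails over from 192.168.100.8:4422 to 192.168.100.8:4420 and resets the controller. For offline triage of a dump like this, the aborted commands can be tallied per opcode; a minimal sketch, assuming the console output was saved to a file (the console.log name is hypothetical):

    # count aborted READ/WRITE commands per opcode in a saved copy of this log
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]\{1,\}' console.log |
        awk '{print $NF}' | sort | uniq -c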
00:17:53.688 13054.70 IOPS, 50.99 MiB/s [2024-12-06T15:30:48.416Z] 13624.36 IOPS, 53.22 MiB/s [2024-12-06T15:30:48.416Z] 14099.33 IOPS, 55.08 MiB/s [2024-12-06T15:30:48.416Z] 14499.54 IOPS, 56.64 MiB/s [2024-12-06T15:30:48.416Z] 14841.43 IOPS, 57.97 MiB/s [2024-12-06T15:30:48.416Z] 15138.40 IOPS, 59.13 MiB/s
00:17:53.688 Latency(us)
00:17:53.688 [2024-12-06T15:30:48.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:53.688 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:53.688 Verification LBA range: start 0x0 length 0x4000
00:17:53.688 NVMe0n1 : 15.01 15139.20 59.14 343.13 0.00 8247.93 332.23 1012846.74
00:17:53.688 [2024-12-06T15:30:48.416Z] ===================================================================================================================
00:17:53.688 [2024-12-06T15:30:48.416Z] Total : 15139.20 59.14 343.13 0.00 8247.93 332.23 1012846.74
00:17:53.688 Received shutdown signal, test time was about 15.000000 seconds
00:17:53.688
00:17:53.688 Latency(us)
00:17:53.689 [2024-12-06T15:30:48.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:53.689 [2024-12-06T15:30:48.417Z] ===================================================================================================================
00:17:53.689 [2024-12-06T15:30:48.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3840793
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3840793 /var/tmp/bdevperf.sock
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3840793 ']'
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:53.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:17:53.689 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:17:53.945 [2024-12-06 16:30:48.479073] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:17:53.945 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:17:53.945 [2024-12-06 16:30:48.663680] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:17:54.202 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:17:54.460 NVMe0n1
00:17:54.460 16:30:48 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:17:54.460
00:17:54.717 16:30:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:17:54.717
00:17:54.717 16:30:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:17:54.717 16:30:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:54.974 16:30:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:55.231 16:30:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:17:58.506 16:30:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:58.506 16:30:52 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:17:58.506 16:30:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3841596
00:17:58.506 16:30:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:58.506 16:30:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3841596
00:17:59.438 {
00:17:59.438 "results": [
00:17:59.438 {
00:17:59.438 "job": "NVMe0n1",
00:17:59.438 "core_mask": "0x1",
00:17:59.438 "workload": "verify",
00:17:59.438 "status": "finished",
00:17:59.438 "verify_range": {
00:17:59.438 "start": 0,
00:17:59.438 "length": 16384
00:17:59.438 },
00:17:59.438 "queue_depth": 128,
00:17:59.438 "io_size": 4096,
00:17:59.438 "runtime": 1.008789,
00:17:59.438 "iops": 18905.836602104107,
00:17:59.438 "mibps": 73.85092422696917,
00:17:59.438 "io_failed": 0,
00:17:59.438 "io_timeout": 0,
00:17:59.438 "avg_latency_us": 6734.886880437484,
00:17:59.438 "min_latency_us": 2427.259259259259,
00:17:59.438 "max_latency_us": 18058.80888888889
00:17:59.438 }
00:17:59.438 ],
00:17:59.438 "core_count": 1
00:17:59.438 }
00:17:59.438 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:17:59.438 [2024-12-06 16:30:48.136256] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization...
00:17:59.438 [2024-12-06 16:30:48.136304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840793 ]
00:17:59.438 [2024-12-06 16:30:48.193556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:59.438 [2024-12-06 16:30:48.227645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:59.438 [2024-12-06 16:30:49.778273] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:17:59.438 [2024-12-06 16:30:49.778873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:17:59.438 [2024-12-06 16:30:49.778909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:17:59.438 [2024-12-06 16:30:49.794810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0
00:17:59.438 [2024-12-06 16:30:49.810857] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:17:59.438 Running I/O for 1 seconds...
00:17:59.438 18882.00 IOPS, 73.76 MiB/s
00:17:59.438 Latency(us)
00:17:59.438 [2024-12-06T15:30:54.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:59.438 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:59.438 Verification LBA range: start 0x0 length 0x4000
00:17:59.438 NVMe0n1 : 1.01 18905.84 73.85 0.00 0.00 6734.89 2427.26 18058.81
00:17:59.438 [2024-12-06T15:30:54.166Z] ===================================================================================================================
00:17:59.438 [2024-12-06T15:30:54.166Z] Total : 18905.84 73.85 0.00 0.00 6734.89 2427.26 18058.81
00:17:59.438 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:59.438 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:17:59.694 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:59.951 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:17:59.951 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:00.208 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:00.208 16:30:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:03.483 16:30:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:03.483 16:30:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3840793
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3840793 ']'
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3840793
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840793
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840793'
00:18:03.483 killing process with pid 3840793
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3840793
00:18:03.483 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3840793
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:03.740 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:03.997 rmmod nvme_rdma
00:18:03.997 rmmod nvme_fabrics
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3837609 ']'
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3837609
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3837609 ']'
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3837609
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3837609
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3837609'
00:18:03.997 killing process with pid 3837609
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3837609
00:18:03.997 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3837609
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:04.255
00:18:04.255 real 0m33.648s
00:18:04.255 user 1m56.503s
00:18:04.255 sys 0m5.559s
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
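Stripped of the xtrace prefixes, the failover body that just finished is a short RPC sequence: expose two extra listeners on the target, register all three portals against one bdev in failover mode, drop the active path, and require exactly three successful resets in the captured output. A condensed sketch of the flow as traced above (commands taken from the trace, invoked from the SPDK checkout root; error handling omitted):

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # target side: listen on the two extra portals (failover.sh@76-77)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
    # host side: register 4420/4421/4422 as failover paths for the same bdev (failover.sh@78-80)
    for port in 4420 4421 4422; do
        $RPC bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path and give bdev_nvme time to reset onto the next one (failover.sh@84-87)
    $RPC bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    # pass criterion seen at failover.sh@65-67: exactly 3 successful resets in the captured log
    count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1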
00:18:04.255 ************************************
00:18:04.255 END TEST nvmf_failover
00:18:04.255 ************************************
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:04.255 ************************************
00:18:04.255 START TEST nvmf_host_discovery
00:18:04.255 ************************************
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:18:04.255 * Looking for test storage...
00:18:04.255 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:04.255 16:30:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:18:04.514 16:30:58 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:04.514 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:04.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.514 --rc genhtml_branch_coverage=1
00:18:04.514 --rc genhtml_function_coverage=1
00:18:04.514 --rc genhtml_legend=1
00:18:04.514 --rc geninfo_all_blocks=1
00:18:04.514 --rc geninfo_unexecuted_blocks=1
00:18:04.514
00:18:04.514 '
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.515 --rc genhtml_branch_coverage=1
00:18:04.515 --rc genhtml_function_coverage=1
00:18:04.515 --rc genhtml_legend=1
00:18:04.515 --rc geninfo_all_blocks=1
00:18:04.515 --rc geninfo_unexecuted_blocks=1
00:18:04.515
00:18:04.515 '
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.515 --rc genhtml_branch_coverage=1
00:18:04.515 --rc genhtml_function_coverage=1
00:18:04.515 --rc genhtml_legend=1
00:18:04.515 --rc geninfo_all_blocks=1
00:18:04.515 --rc geninfo_unexecuted_blocks=1
00:18:04.515
00:18:04.515 '
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:04.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.515 --rc genhtml_branch_coverage=1
00:18:04.515 --rc genhtml_function_coverage=1
00:18:04.515 --rc genhtml_legend=1
00:18:04.515 --rc geninfo_all_blocks=1
00:18:04.515 --rc geninfo_unexecuted_blocks=1
00:18:04.515
00:18:04.515 '
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:04.515 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
00:18:04.515 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0
00:18:04.515
00:18:04.515 real 0m0.188s
00:18:04.515 user 0m0.126s
00:18:04.515 sys 0m0.072s
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:04.515 ************************************
00:18:04.515 END TEST nvmf_host_discovery
00:18:04.515 ************************************
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:04.515 16:30:59 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:04.516 ************************************
00:18:04.516 START TEST nvmf_host_multipath_status
00:18:04.516 ************************************
00:18:04.516 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma
00:18:04.516 * Looking for test storage...
00:18:04.516 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:18:04.516 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:04.516 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:18:04.516 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:04.773 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.774 --rc genhtml_branch_coverage=1
00:18:04.774 --rc genhtml_function_coverage=1
00:18:04.774 --rc genhtml_legend=1
00:18:04.774 --rc geninfo_all_blocks=1
00:18:04.774 --rc geninfo_unexecuted_blocks=1
00:18:04.774
00:18:04.774 '
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.774 --rc genhtml_branch_coverage=1
00:18:04.774 --rc genhtml_function_coverage=1
00:18:04.774 --rc genhtml_legend=1
00:18:04.774 --rc geninfo_all_blocks=1
00:18:04.774 --rc geninfo_unexecuted_blocks=1
00:18:04.774
00:18:04.774 '
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.774 --rc genhtml_branch_coverage=1
00:18:04.774 --rc genhtml_function_coverage=1
00:18:04.774 --rc genhtml_legend=1
00:18:04.774 --rc geninfo_all_blocks=1
00:18:04.774 --rc geninfo_unexecuted_blocks=1
00:18:04.774
00:18:04.774 '
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:04.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:04.774 --rc genhtml_branch_coverage=1
00:18:04.774 --rc genhtml_function_coverage=1
00:18:04.774 --rc genhtml_legend=1
00:18:04.774 --rc geninfo_all_blocks=1
00:18:04.774 --rc geninfo_unexecuted_blocks=1
00:18:04.774
00:18:04.774 '
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.774 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
-- # '[' '' -eq 1 ']' 00:18:04.775 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.775 16:30:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.036 16:31:04 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:10.036 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:10.036 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:10.036 Found net devices under 0000:18:00.0: mlx_0_0 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:10.036 Found net devices under 0000:18:00.1: mlx_0_1 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:10.036 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.037 
16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:10.037 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.037 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:10.037 altname enp24s0f0np0 00:18:10.037 altname ens785f0np0 00:18:10.037 inet 192.168.100.8/24 scope global mlx_0_0 00:18:10.037 valid_lft forever preferred_lft forever 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:10.037 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
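
The allocate_nic_ips pass above resolves each RDMA interface to its IPv4 address by slicing the output of "ip -o -4 addr show", yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. A minimal sketch of the get_ip_address helper the trace is executing, assuming only the interface names and addresses seen in this run:

    # Sketch of the get_ip_address helper traced above (nvmf/common.sh):
    # print the first IPv4 address bound to an interface. The awk field
    # and cut delimiter mirror the pipeline visible in the trace.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
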
00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:10.295 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:10.295 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:10.295 altname enp24s0f1np1 00:18:10.295 altname ens785f1np1 00:18:10.295 inet 192.168.100.9/24 scope global mlx_0_1 00:18:10.295 valid_lft forever preferred_lft forever 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:10.295 192.168.100.9' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:10.295 192.168.100.9' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:10.295 192.168.100.9' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@509 -- # nvmfpid=3845973 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3845973 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3845973 ']' 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.295 16:31:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:10.295 [2024-12-06 16:31:04.921697] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:18:10.295 [2024-12-06 16:31:04.921741] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.295 [2024-12-06 16:31:04.979803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:10.295 [2024-12-06 16:31:05.018174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.296 [2024-12-06 16:31:05.018209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.296 [2024-12-06 16:31:05.018217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.296 [2024-12-06 16:31:05.018223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.296 [2024-12-06 16:31:05.018228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
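
Before nvmf_tgt is launched and waitforlisten blocks on /var/tmp/spdk.sock, the harness derives the two target IPs by slicing the newline-separated RDMA_IP_LIST with head/tail, as traced above. A hedged sketch of that slicing, using this run's addresses:

    # First IP = first line of the list, second IP = the line after it,
    # exactly as the head/tail pipeline in the trace computes them.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
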
00:18:10.296 [2024-12-06 16:31:05.021395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.296 [2024-12-06 16:31:05.021400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3845973 00:18:10.553 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.811 [2024-12-06 16:31:05.328201] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd37940/0xd3be30) succeed. 00:18:10.811 [2024-12-06 16:31:05.336192] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd38e90/0xd7d4d0) succeed. 00:18:10.811 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:11.069 Malloc0 00:18:11.070 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:11.070 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.328 16:31:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:11.586 [2024-12-06 16:31:06.078445] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:11.586 [2024-12-06 16:31:06.242684] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3846249 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3846249 /var/tmp/bdevperf.sock 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3846249 ']' 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.586 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:11.844 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.844 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:11.844 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:12.102 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:12.360 Nvme0n1 00:18:12.360 16:31:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:12.618 Nvme0n1 00:18:12.618 16:31:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:12.618 16:31:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:14.523 16:31:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:14.523 16:31:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:18:14.782 16:31:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:15.040 16:31:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:16.000 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.258 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:16.258 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:16.258 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.258 16:31:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:16.515 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.515 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:16.515 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.515 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
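
Each check_status round above expands into six port_status probes: bdev_nvme_get_io_paths is queried over bdevperf's RPC socket and a jq filter selects one field (current, connected, accessible) of the io_path on a given port. A minimal sketch of that probe; the rpc.py location is abbreviated here, while the run uses the full workspace path:

    # Compare one jq-selected field of the io_path on a given trsvcid
    # against the expected value; returns non-zero on mismatch.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true   # first probe of check_status above
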
00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.773 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:17.031 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:17.031 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:17.031 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:17.290 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:17.290 16:31:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:18.666 16:31:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:18.666 16:31:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:18.666 16:31:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.666 16:31:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.666 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:18.924 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:18.924 16:31:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:18.924 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:18.924 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.183 16:31:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:19.441 16:31:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.441 16:31:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:19.441 16:31:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:19.700 16:31:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:18:19.700 16:31:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:21.092 16:31:15 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.092 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:21.350 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.350 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:21.350 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.350 16:31:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:21.608 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:21.867 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:21.867 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:18:21.867 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:22.129 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:18:22.129 16:31:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:23.507 16:31:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:23.507 16:31:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:23.507 16:31:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.507 16:31:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.507 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:23.766 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.766 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:23.766 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.766 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.025 16:31:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:24.025 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:24.284 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:24.284 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:24.284 16:31:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:18:24.542 16:31:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:18:24.800 16:31:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:25.737 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:25.737 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:25.737 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.737 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:25.995 16:31:20 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.995 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:26.254 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.254 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.254 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.254 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.512 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.512 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:26.512 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.512 16:31:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.512 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.512 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:26.512 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:26.512 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.770 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:26.770 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:26.770 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:18:27.029 16:31:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:27.029 16:31:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.405 16:31:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:28.405 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.405 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:28.405 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.405 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:28.663 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.663 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:28.663 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.663 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]]
00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:28.922 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:18:29.180 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:29.180 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:18:29.438 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:18:29.438 16:31:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized
00:18:29.438 16:31:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:18:29.696 16:31:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:18:30.630 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:18:30.630 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:18:30.630 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:30.630 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:18:30.888 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:30.888 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:18:30.888 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:30.888 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:31.295 16:31:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:31.613 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:18:31.871 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:31.871 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:18:31.871 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:18:32.129 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
00:18:32.129 16:31:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:18:33.063 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:18:33.063 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:18:33.063 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:33.064 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:18:33.321 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:18:33.321 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:18:33.322 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:33.322 16:31:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:18:33.580 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:33.580 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:18:33.580 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:33.580 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:33.839 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:18:34.098 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:34.098 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:18:34.098 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:34.098 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:18:34.356 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:34.356 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:18:34.357 16:31:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:18:34.357 16:31:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized
00:18:34.615 16:31:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:18:35.550 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:18:35.550 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:18:35.550 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:35.550 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:18:35.809 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:35.809 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:18:35.809 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:35.809 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:18:36.068 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:36.327 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:36.327 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:18:36.327 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:36.327 16:31:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:18:36.586 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
00:18:36.844 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible
00:18:37.103 16:31:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:18:38.039 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:18:38.039 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:18:38.039 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:38.039 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:18:38.299 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:38.299 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:18:38.299 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:38.299 16:31:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:18:38.299 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:18:38.299 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:18:38.299 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:38.299 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:18:38.559 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:38.559 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:18:38.559 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:38.559 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:18:38.817 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:38.817 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:18:38.817 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:18:38.817 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3846249
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3846249 ']'
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3846249
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3846249
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3846249'
00:18:39.076 killing process with pid 3846249
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3846249
00:18:39.076 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3846249
00:18:39.076 {
00:18:39.076   "results": [
00:18:39.076     {
00:18:39.076       "job": "Nvme0n1",
00:18:39.077       "core_mask": "0x4",
00:18:39.077       "workload": "verify",
00:18:39.077       "status": "terminated",
00:18:39.077       "verify_range": {
00:18:39.077         "start": 0,
00:18:39.077         "length": 16384
00:18:39.077       },
00:18:39.077       "queue_depth": 128,
00:18:39.077       "io_size": 4096,
00:18:39.077       "runtime": 26.472113,
00:18:39.077       "iops": 16632.78635898842,
00:18:39.077       "mibps": 64.97182171479851,
00:18:39.077       "io_failed": 0,
00:18:39.077       "io_timeout": 0,
00:18:39.077       "avg_latency_us": 7674.5995106859855,
00:18:39.077       "min_latency_us": 78.50666666666666,
00:18:39.077       "max_latency_us": 3007471.3125925926
00:18:39.077     }
00:18:39.077   ],
00:18:39.077   "core_count": 1
00:18:39.077 }
00:18:39.350 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3846249
00:18:39.350 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:39.350 [2024-12-06 16:31:06.303509] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization...
00:18:39.350 [2024-12-06 16:31:06.303555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846249 ]
00:18:39.350 [2024-12-06 16:31:06.359223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:39.350 [2024-12-06 16:31:06.398704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:39.350 Running I/O for 90 seconds...
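Before the captured bdevperf output continues below: the xtrace above is the script's whole verification loop. port_status reads one attribute of the matching io_path from bdevperf's RPC socket and compares it with the expected value, and set_ANA_state retargets both listeners before the next check_status round. A minimal sketch of that pattern, reusing only commands that appear in the trace (rpc.py, the /var/tmp/bdevperf.sock socket, the cnode1 NQN); the helper bodies here are a reconstruction for illustration, not the verbatim multipath_status.sh source:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

port_status() {
    # port_status <trsvcid> <field> <expected>, e.g. port_status 4421 accessible false:
    # fetch the io_paths JSON from bdevperf and compare one field of the matching path.
    local port=$1 field=$2 expected=$3 got
    got=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $got == "$expected" ]]
}

set_ANA_state() {
    # set_ANA_state <state-for-4420> <state-for-4421> against the cnode1 subsystem.
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

set_ANA_state non_optimized inaccessible   # the multipath_status.sh@133 step above
sleep 1                                    # give the host a poll cycle to observe the change
port_status 4421 current false && port_status 4421 accessible false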
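The JSON block above is bdevperf's final summary for the terminated verify job (Nvme0n1, roughly 16.6K IOPS over a 26.47 s runtime with zero failed I/O). A hedged one-liner for pulling the headline numbers out of such a summary, assuming it has been captured to a file named bdevperf.json (a name chosen here for illustration; the log only prints the block inline):

jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, \(.io_failed) failed"' bdevperf.json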
00:18:39.350 19456.00 IOPS, 76.00 MiB/s [2024-12-06T15:31:34.078Z] 19617.00 IOPS, 76.63 MiB/s [2024-12-06T15:31:34.078Z] 19626.67 IOPS, 76.67 MiB/s [2024-12-06T15:31:34.078Z] 19603.00 IOPS, 76.57 MiB/s [2024-12-06T15:31:34.078Z] 19590.20 IOPS, 76.52 MiB/s [2024-12-06T15:31:34.078Z] 19605.33 IOPS, 76.58 MiB/s [2024-12-06T15:31:34.078Z] 19602.29 IOPS, 76.57 MiB/s [2024-12-06T15:31:34.078Z] 19587.50 IOPS, 76.51 MiB/s [2024-12-06T15:31:34.078Z] 19576.56 IOPS, 76.47 MiB/s [2024-12-06T15:31:34.078Z] 19568.10 IOPS, 76.44 MiB/s [2024-12-06T15:31:34.078Z] 19560.73 IOPS, 76.41 MiB/s [2024-12-06T15:31:34.078Z] [2024-12-06 16:31:19.073585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183100 00:18:39.350 
[2024-12-06 16:31:19.073799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.073983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.073994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.074002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.074021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.074040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.074065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x183100 00:18:39.350 [2024-12-06 16:31:19.074087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 
16:31:19.074186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.350 [2024-12-06 16:31:19.074238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:39.350 [2024-12-06 16:31:19.074250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074842] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.074981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.074990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:39.351 [2024-12-06 16:31:19.075002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.351 [2024-12-06 16:31:19.075011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:10664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.075981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.075997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:39.352 [2024-12-06 16:31:19.076446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:39.352 [2024-12-06 16:31:19.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:18:39.352 [2024-12-06 16:31:19.076472 .. 16:31:19.077297] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: 33 WRITE commands sqid:1 nsid:1 lba:10824-11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0046-0066 p:0 m:0 dnr:0]
00:18:39.353 19219.50 IOPS, 75.08 MiB/s [2024-12-06T15:31:34.081Z] 17741.08 IOPS, 69.30 MiB/s [2024-12-06T15:31:34.081Z] 16473.86 IOPS, 64.35 MiB/s [2024-12-06T15:31:34.081Z] 15640.20 IOPS, 61.09 MiB/s [2024-12-06T15:31:34.081Z] 15889.62 IOPS, 62.07 MiB/s [2024-12-06T15:31:34.081Z] 16071.71 IOPS, 62.78 MiB/s [2024-12-06T15:31:34.081Z] 16058.94 IOPS, 62.73 MiB/s [2024-12-06T15:31:34.081Z] 16039.74 IOPS, 62.66 MiB/s [2024-12-06T15:31:34.081Z] 16163.35 IOPS, 63.14 MiB/s [2024-12-06T15:31:34.081Z] 16331.90 IOPS, 63.80 MiB/s [2024-12-06T15:31:34.081Z] 16460.00 IOPS, 64.30 MiB/s [2024-12-06T15:31:34.081Z] 16423.17 IOPS, 64.15 MiB/s [2024-12-06T15:31:34.081Z] 16388.50 IOPS, 64.02 MiB/s [2024-12-06T15:31:34.081Z]
00:18:39.353 [2024-12-06 16:31:31.623637 .. 16:31:31.625647] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: interleaved WRITE (sqid:1 nsid:1 lba:40368-40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (sqid:1 nsid:1 lba:39896-40352 len:8 SGL KEYED DATA BLOCK ADDRESS key:0x183100) commands, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:005e-001d (wrapping) p:0 m:0 dnr:0]
00:18:39.355 16467.08 IOPS, 64.32 MiB/s [2024-12-06T15:31:34.083Z] 16584.65 IOPS, 64.78 MiB/s [2024-12-06T15:31:34.083Z]
00:18:39.355 Received shutdown signal, test time was about 26.472698 seconds
00:18:39.355
00:18:39.355                                                Latency(us)
00:18:39.355 [2024-12-06T15:31:34.083Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average        min          max
00:18:39.355 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:39.355 Verification LBA range: start 0x0 length 0x4000
00:18:39.355                            Nvme0n1           :      26.47   16632.79      64.97      0.00    0.00    7674.60      78.51   3007471.31
00:18:39.355 [2024-12-06T15:31:34.083Z] ===================================================================================================================
00:18:39.355 [2024-12-06T15:31:34.083Z] Total             :              16632.79      64.97      0.00    0.00    7674.60      78.51   3007471.31
00:18:39.355 16:31:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
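The (03/02) status on every completion above decodes as Status Code Type 0x3 (Path Related Status), Status Code 0x02 (Asymmetric Namespace Access Inaccessible): while multipath_status flips the ANA state of the active path, the target fails queued I/O back to the host with dnr:0 (do-not-retry clear), so the bdev_nvme multipath layer retries on the surviving path, which is why the IOPS ticker dips and recovers instead of the job failing. The teardown traced just above reduces to a short shell sequence; the sketch below is condensed from the trace, not copied from the nvmftestfini/nvmfcleanup functions, and the loop structure and the TEST_TRANSPORT variable name are approximations:

  # Sketch of the traced teardown, assuming the workspace layout in this log.
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem under test
  trap - SIGINT SIGTERM EXIT                                      # clear the error-cleanup trap
  rm -f test/nvmf/host/try.txt                                    # scratch file used by the test
  sync
  if [ "$TEST_TRANSPORT" = rdma ]; then                           # rdma run: unload initiator modules
      set +e                                                      # removal can fail while references drain
      for i in {1..20}; do
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      done
      set -e
  fi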
16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3845973 ']' 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3845973 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3845973 ']' 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3845973 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845973 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845973' 00:18:39.614 killing process with pid 3845973 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3845973 00:18:39.614 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3845973 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:39.873 00:18:39.873 real 0m35.334s 00:18:39.873 user 1m42.961s 00:18:39.873 sys 0m7.309s 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:39.873 ************************************ 00:18:39.873 END TEST nvmf_host_multipath_status 00:18:39.873 ************************************ 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.873 ************************************ 00:18:39.873 START TEST nvmf_discovery_remove_ifc 00:18:39.873 ************************************ 00:18:39.873 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:18:39.873 * Looking for test storage... 
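killprocess, traced above with pid 3845973, only signals a process it can positively identify: the pid argument must be non-empty, the process must still be alive, and (on Linux) its comm name must not be sudo; only then does it announce the kill, signal, and reap. A minimal sketch of that guard logic, reconstructed from the trace rather than copied from autotest_common.sh (the real helper handles the sudo case differently; this sketch simply refuses):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # traced as: '[' -z 3845973 ']'
      kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
          [ "$process_name" = sudo ] && return 1           # never signal the sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # works because the target was launched by this shell; reaps the exit code
  }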
00:18:40.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:18:40.132 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.133 --rc genhtml_branch_coverage=1 00:18:40.133 --rc genhtml_function_coverage=1 00:18:40.133 --rc genhtml_legend=1 00:18:40.133 --rc geninfo_all_blocks=1 00:18:40.133 --rc geninfo_unexecuted_blocks=1 00:18:40.133 00:18:40.133 ' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.133 --rc genhtml_branch_coverage=1 00:18:40.133 --rc genhtml_function_coverage=1 00:18:40.133 --rc genhtml_legend=1 00:18:40.133 --rc geninfo_all_blocks=1 00:18:40.133 --rc geninfo_unexecuted_blocks=1 00:18:40.133 00:18:40.133 ' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.133 --rc genhtml_branch_coverage=1 00:18:40.133 --rc genhtml_function_coverage=1 00:18:40.133 --rc genhtml_legend=1 00:18:40.133 --rc geninfo_all_blocks=1 00:18:40.133 --rc geninfo_unexecuted_blocks=1 00:18:40.133 00:18:40.133 ' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.133 --rc genhtml_branch_coverage=1 00:18:40.133 --rc genhtml_function_coverage=1 00:18:40.133 --rc genhtml_legend=1 00:18:40.133 --rc geninfo_all_blocks=1 00:18:40.133 --rc geninfo_unexecuted_blocks=1 00:18:40.133 00:18:40.133 ' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
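The lt 1.15 2 call traced above asks whether the installed lcov (1.15 here) predates version 2, and funnels into a generic comparator: both version strings are split on '.', '-' and ':', missing fields default to 0, and the first unequal field decides. A hedged sketch of that comparator, simplified from what the trace shows of scripts/common.sh (the traced decimal() digit validation is omitted):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local op=$2 v n1 n2 ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$3"
      n1=${#ver1[@]} n2=${#ver2[@]}
      for (( v = 0; v < (n1 > n2 ? n1 : n2); v++ )); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
          (( d1 > d2 )) && { [ "$op" = '>' ]; return; }   # first difference decides
          (( d1 < d2 )) && { [ "$op" = '<' ]; return; }
      done
      [ "$op" = '=' ]                                     # equal all the way through
  }

Here lt 1.15 2 succeeds (1 < 2 in the first field), which is why the trace goes on to export the lcov 1.x option spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) in LCOV_OPTS.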
00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.133 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:40.133 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:18:40.133 00:18:40.133 real 0m0.198s 00:18:40.133 user 0m0.117s 00:18:40.133 sys 0m0.093s 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:40.133 ************************************ 00:18:40.133 END TEST nvmf_discovery_remove_ifc 00:18:40.133 ************************************ 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.133 ************************************ 00:18:40.133 START TEST nvmf_identify_kernel_target 00:18:40.133 ************************************ 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:18:40.133 * Looking for test storage... 00:18:40.133 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.133 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.392 --rc genhtml_branch_coverage=1 00:18:40.392 --rc genhtml_function_coverage=1 00:18:40.392 --rc genhtml_legend=1 00:18:40.392 --rc geninfo_all_blocks=1 00:18:40.392 --rc geninfo_unexecuted_blocks=1 00:18:40.392 00:18:40.392 ' 00:18:40.392 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.392 --rc genhtml_branch_coverage=1 00:18:40.392 --rc genhtml_function_coverage=1 00:18:40.392 --rc genhtml_legend=1 00:18:40.392 --rc geninfo_all_blocks=1 00:18:40.393 --rc geninfo_unexecuted_blocks=1 00:18:40.393 00:18:40.393 ' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.393 --rc genhtml_branch_coverage=1 00:18:40.393 --rc genhtml_function_coverage=1 00:18:40.393 --rc genhtml_legend=1 00:18:40.393 --rc geninfo_all_blocks=1 00:18:40.393 --rc geninfo_unexecuted_blocks=1 00:18:40.393 00:18:40.393 ' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.393 --rc genhtml_branch_coverage=1 00:18:40.393 --rc genhtml_function_coverage=1 00:18:40.393 --rc genhtml_legend=1 00:18:40.393 --rc geninfo_all_blocks=1 00:18:40.393 --rc geninfo_unexecuted_blocks=1 00:18:40.393 00:18:40.393 ' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.393 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.393 16:31:34 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
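
Note the genuine bash error recorded at the top of this stretch: nvmf/common.sh line 33 evaluated '[' '' -eq 1 ']', i.e. an unset or empty variable reached a numeric test, producing "[: : integer expression expected". The trace does not reveal which variable was empty, so the name below is hypothetical; the defensive pattern is to default empty to 0 before comparing:

    # SOME_TEST_FLAG is a stand-in -- xtrace hides the actual variable name
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
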
local -ga x722 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:46.955 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:46.955 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:46.955 Found net devices under 0000:18:00.0: mlx_0_0 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:46.955 Found net devices under 0000:18:00.1: mlx_0_1 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.955 16:31:40 
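
Both ConnectX ports (0x15b3:0x1015) found above are resolved to kernel net device names purely through sysfs: each PCI function lists its interfaces under /sys/bus/pci/devices/<BDF>/net/. A self-contained sketch of that lookup, using the first port from the trace:

    pci=0000:18:00.0                              # first mlx5 function reported above
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue                 # function has no net children
        echo "${dev##*/}"                         # prints the ifname, e.g. mlx_0_0
    done
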
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:46.955 
16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:18:46.955 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:46.956 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:46.956 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:46.956 altname enp24s0f0np0 00:18:46.956 altname ens785f0np0 00:18:46.956 inet 192.168.100.8/24 scope global mlx_0_0 00:18:46.956 valid_lft forever preferred_lft forever 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:46.956 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:46.956 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:46.956 altname enp24s0f1np1 00:18:46.956 altname ens785f1np1 00:18:46.956 inet 192.168.100.9/24 scope global mlx_0_1 00:18:46.956 valid_lft forever preferred_lft forever 00:18:46.956 16:31:40 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:46.956 
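
The per-interface IPv4 lookup repeated throughout this stretch is a three-stage pipeline: 'ip -o -4 addr show <if>' emits one record per line, awk pulls the CIDR field, and cut drops the prefix length. Reassembled into the get_ip_address shape the trace exercises (a reconstruction from xtrace, which may omit details of the real function):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9
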
16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:46.956 192.168.100.9' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:46.956 192.168.100.9' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:46.956 192.168.100.9' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:46.956 16:31:40 
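
Earlier in this stretch both addresses were collected into RDMA_IP_LIST (newline-separated) and the first and second target IPs peeled off with head/tail; xtrace interleaves the pipeline stages, but the net effect is:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'          # as assembled above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
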
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:46.956 16:31:40 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:18:48.860 Waiting for block devices as requested 00:18:48.860 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:49.119 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:49.119 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:49.119 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:49.119 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:49.378 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:49.378 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:49.378 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:49.378 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:49.638 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:49.638 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:49.638 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:49.897 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:49.897 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:49.897 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:49.897 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:50.155 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:18:51.528 16:31:46 
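
The block-device scan in progress here settles on /dev/nvme0n1 (the GPT probe just below bails, so the disk is free to use), after which configure_kernel_target builds a kernel nvmet target entirely through configfs: mkdir creates the subsystem, namespace, and port objects, echo fills in their attributes, and a symlink binds the subsystem to the port. xtrace does not display the redirection targets of the echo commands, so the attribute file names below are the standard nvmet configfs ones and the mapping is an informed reconstruction, not a verbatim copy of common.sh (run as root):

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn

    mkdir "$sub"
    mkdir "$sub/namespaces/1"
    mkdir /sys/kernel/config/nvmet/ports/1

    echo SPDK-$nqn     > "$sub/attr_model"                 # model string later seen in the identify output
    echo 1             > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"   # the disk that passed the GPT check
    echo 1             > "$sub/namespaces/1/enable"
    echo 192.168.100.8 > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo rdma          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420          > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4          > /sys/kernel/config/nvmet/ports/1/addr_adrfam

    ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/
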
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:51.528 No valid GPT data, bailing 00:18:51.528 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:51.529 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:18:51.787 00:18:51.787 Discovery Log Number of Records 2, Generation counter 2 00:18:51.787 =====Discovery Log Entry 0====== 00:18:51.787 trtype: rdma 00:18:51.787 adrfam: ipv4 00:18:51.787 subtype: current discovery subsystem 00:18:51.787 treq: not specified, sq 
flow control disable supported 00:18:51.787 portid: 1 00:18:51.787 trsvcid: 4420 00:18:51.787 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.787 traddr: 192.168.100.8 00:18:51.787 eflags: none 00:18:51.787 rdma_prtype: not specified 00:18:51.787 rdma_qptype: connected 00:18:51.787 rdma_cms: rdma-cm 00:18:51.787 rdma_pkey: 0x0000 00:18:51.787 =====Discovery Log Entry 1====== 00:18:51.787 trtype: rdma 00:18:51.787 adrfam: ipv4 00:18:51.787 subtype: nvme subsystem 00:18:51.787 treq: not specified, sq flow control disable supported 00:18:51.787 portid: 1 00:18:51.787 trsvcid: 4420 00:18:51.787 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:51.787 traddr: 192.168.100.8 00:18:51.787 eflags: none 00:18:51.787 rdma_prtype: not specified 00:18:51.787 rdma_qptype: connected 00:18:51.787 rdma_cms: rdma-cm 00:18:51.787 rdma_pkey: 0x0000 00:18:51.787 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:18:51.787 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:52.047 ===================================================== 00:18:52.047 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:52.047 ===================================================== 00:18:52.047 Controller Capabilities/Features 00:18:52.047 ================================ 00:18:52.047 Vendor ID: 0000 00:18:52.047 Subsystem Vendor ID: 0000 00:18:52.048 Serial Number: a88afecae5aa8977f05c 00:18:52.048 Model Number: Linux 00:18:52.048 Firmware Version: 6.8.9-20 00:18:52.048 Recommended Arb Burst: 0 00:18:52.048 IEEE OUI Identifier: 00 00 00 00:18:52.048 Multi-path I/O 00:18:52.048 May have multiple subsystem ports: No 00:18:52.048 May have multiple controllers: No 00:18:52.048 Associated with SR-IOV VF: No 00:18:52.048 Max Data Transfer Size: Unlimited 00:18:52.048 Max Number of Namespaces: 0 00:18:52.048 Max Number of I/O Queues: 1024 00:18:52.048 NVMe Specification Version (VS): 1.3 00:18:52.048 NVMe Specification Version (Identify): 1.3 00:18:52.048 Maximum Queue Entries: 128 00:18:52.048 Contiguous Queues Required: No 00:18:52.048 Arbitration Mechanisms Supported 00:18:52.048 Weighted Round Robin: Not Supported 00:18:52.048 Vendor Specific: Not Supported 00:18:52.048 Reset Timeout: 7500 ms 00:18:52.048 Doorbell Stride: 4 bytes 00:18:52.048 NVM Subsystem Reset: Not Supported 00:18:52.048 Command Sets Supported 00:18:52.048 NVM Command Set: Supported 00:18:52.048 Boot Partition: Not Supported 00:18:52.048 Memory Page Size Minimum: 4096 bytes 00:18:52.048 Memory Page Size Maximum: 4096 bytes 00:18:52.048 Persistent Memory Region: Not Supported 00:18:52.048 Optional Asynchronous Events Supported 00:18:52.048 Namespace Attribute Notices: Not Supported 00:18:52.048 Firmware Activation Notices: Not Supported 00:18:52.048 ANA Change Notices: Not Supported 00:18:52.048 PLE Aggregate Log Change Notices: Not Supported 00:18:52.048 LBA Status Info Alert Notices: Not Supported 00:18:52.048 EGE Aggregate Log Change Notices: Not Supported 00:18:52.048 Normal NVM Subsystem Shutdown event: Not Supported 00:18:52.048 Zone Descriptor Change Notices: Not Supported 00:18:52.048 Discovery Log Change Notices: Supported 00:18:52.048 Controller Attributes 00:18:52.048 128-bit Host Identifier: Not Supported 00:18:52.048 Non-Operational Permissive Mode: Not Supported 00:18:52.048 NVM Sets: Not Supported 00:18:52.048 Read Recovery Levels: 
Not Supported 00:18:52.048 Endurance Groups: Not Supported 00:18:52.048 Predictable Latency Mode: Not Supported 00:18:52.048 Traffic Based Keep ALive: Not Supported 00:18:52.048 Namespace Granularity: Not Supported 00:18:52.048 SQ Associations: Not Supported 00:18:52.048 UUID List: Not Supported 00:18:52.048 Multi-Domain Subsystem: Not Supported 00:18:52.048 Fixed Capacity Management: Not Supported 00:18:52.048 Variable Capacity Management: Not Supported 00:18:52.048 Delete Endurance Group: Not Supported 00:18:52.048 Delete NVM Set: Not Supported 00:18:52.048 Extended LBA Formats Supported: Not Supported 00:18:52.048 Flexible Data Placement Supported: Not Supported 00:18:52.048 00:18:52.048 Controller Memory Buffer Support 00:18:52.048 ================================ 00:18:52.048 Supported: No 00:18:52.048 00:18:52.048 Persistent Memory Region Support 00:18:52.048 ================================ 00:18:52.048 Supported: No 00:18:52.048 00:18:52.048 Admin Command Set Attributes 00:18:52.048 ============================ 00:18:52.048 Security Send/Receive: Not Supported 00:18:52.048 Format NVM: Not Supported 00:18:52.048 Firmware Activate/Download: Not Supported 00:18:52.048 Namespace Management: Not Supported 00:18:52.048 Device Self-Test: Not Supported 00:18:52.048 Directives: Not Supported 00:18:52.048 NVMe-MI: Not Supported 00:18:52.048 Virtualization Management: Not Supported 00:18:52.048 Doorbell Buffer Config: Not Supported 00:18:52.048 Get LBA Status Capability: Not Supported 00:18:52.048 Command & Feature Lockdown Capability: Not Supported 00:18:52.048 Abort Command Limit: 1 00:18:52.048 Async Event Request Limit: 1 00:18:52.048 Number of Firmware Slots: N/A 00:18:52.048 Firmware Slot 1 Read-Only: N/A 00:18:52.048 Firmware Activation Without Reset: N/A 00:18:52.048 Multiple Update Detection Support: N/A 00:18:52.048 Firmware Update Granularity: No Information Provided 00:18:52.048 Per-Namespace SMART Log: No 00:18:52.048 Asymmetric Namespace Access Log Page: Not Supported 00:18:52.048 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:52.048 Command Effects Log Page: Not Supported 00:18:52.048 Get Log Page Extended Data: Supported 00:18:52.048 Telemetry Log Pages: Not Supported 00:18:52.048 Persistent Event Log Pages: Not Supported 00:18:52.048 Supported Log Pages Log Page: May Support 00:18:52.048 Commands Supported & Effects Log Page: Not Supported 00:18:52.048 Feature Identifiers & Effects Log Page:May Support 00:18:52.048 NVMe-MI Commands & Effects Log Page: May Support 00:18:52.048 Data Area 4 for Telemetry Log: Not Supported 00:18:52.048 Error Log Page Entries Supported: 1 00:18:52.048 Keep Alive: Not Supported 00:18:52.048 00:18:52.048 NVM Command Set Attributes 00:18:52.048 ========================== 00:18:52.048 Submission Queue Entry Size 00:18:52.048 Max: 1 00:18:52.048 Min: 1 00:18:52.048 Completion Queue Entry Size 00:18:52.048 Max: 1 00:18:52.048 Min: 1 00:18:52.048 Number of Namespaces: 0 00:18:52.048 Compare Command: Not Supported 00:18:52.048 Write Uncorrectable Command: Not Supported 00:18:52.048 Dataset Management Command: Not Supported 00:18:52.048 Write Zeroes Command: Not Supported 00:18:52.048 Set Features Save Field: Not Supported 00:18:52.048 Reservations: Not Supported 00:18:52.048 Timestamp: Not Supported 00:18:52.048 Copy: Not Supported 00:18:52.048 Volatile Write Cache: Not Present 00:18:52.048 Atomic Write Unit (Normal): 1 00:18:52.048 Atomic Write Unit (PFail): 1 00:18:52.048 Atomic Compare & Write Unit: 1 00:18:52.048 Fused Compare & Write: Not 
Supported 00:18:52.048 Scatter-Gather List 00:18:52.048 SGL Command Set: Supported 00:18:52.048 SGL Keyed: Supported 00:18:52.048 SGL Bit Bucket Descriptor: Not Supported 00:18:52.048 SGL Metadata Pointer: Not Supported 00:18:52.048 Oversized SGL: Not Supported 00:18:52.048 SGL Metadata Address: Not Supported 00:18:52.048 SGL Offset: Supported 00:18:52.048 Transport SGL Data Block: Not Supported 00:18:52.048 Replay Protected Memory Block: Not Supported 00:18:52.048 00:18:52.048 Firmware Slot Information 00:18:52.048 ========================= 00:18:52.048 Active slot: 0 00:18:52.048 00:18:52.048 00:18:52.048 Error Log 00:18:52.048 ========= 00:18:52.048 00:18:52.048 Active Namespaces 00:18:52.048 ================= 00:18:52.048 Discovery Log Page 00:18:52.048 ================== 00:18:52.048 Generation Counter: 2 00:18:52.048 Number of Records: 2 00:18:52.048 Record Format: 0 00:18:52.048 00:18:52.048 Discovery Log Entry 0 00:18:52.048 ---------------------- 00:18:52.048 Transport Type: 1 (RDMA) 00:18:52.048 Address Family: 1 (IPv4) 00:18:52.048 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:52.048 Entry Flags: 00:18:52.048 Duplicate Returned Information: 0 00:18:52.048 Explicit Persistent Connection Support for Discovery: 0 00:18:52.048 Transport Requirements: 00:18:52.048 Secure Channel: Not Specified 00:18:52.048 Port ID: 1 (0x0001) 00:18:52.048 Controller ID: 65535 (0xffff) 00:18:52.048 Admin Max SQ Size: 32 00:18:52.048 Transport Service Identifier: 4420 00:18:52.048 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:52.048 Transport Address: 192.168.100.8 00:18:52.048 Transport Specific Address Subtype - RDMA 00:18:52.048 RDMA QP Service Type: 1 (Reliable Connected) 00:18:52.048 RDMA Provider Type: 1 (No provider specified) 00:18:52.048 RDMA CM Service: 1 (RDMA_CM) 00:18:52.048 Discovery Log Entry 1 00:18:52.048 ---------------------- 00:18:52.048 Transport Type: 1 (RDMA) 00:18:52.048 Address Family: 1 (IPv4) 00:18:52.048 Subsystem Type: 2 (NVM Subsystem) 00:18:52.048 Entry Flags: 00:18:52.048 Duplicate Returned Information: 0 00:18:52.048 Explicit Persistent Connection Support for Discovery: 0 00:18:52.048 Transport Requirements: 00:18:52.048 Secure Channel: Not Specified 00:18:52.049 Port ID: 1 (0x0001) 00:18:52.049 Controller ID: 65535 (0xffff) 00:18:52.049 Admin Max SQ Size: 32 00:18:52.049 Transport Service Identifier: 4420 00:18:52.049 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:52.049 Transport Address: 192.168.100.8 00:18:52.049 Transport Specific Address Subtype - RDMA 00:18:52.049 RDMA QP Service Type: 1 (Reliable Connected) 00:18:52.049 RDMA Provider Type: 1 (No provider specified) 00:18:52.049 RDMA CM Service: 1 (RDMA_CM) 00:18:52.049 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:52.049 get_feature(0x01) failed 00:18:52.049 get_feature(0x02) failed 00:18:52.049 get_feature(0x04) failed 00:18:52.049 ===================================================== 00:18:52.049 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:18:52.049 ===================================================== 00:18:52.049 Controller Capabilities/Features 00:18:52.049 ================================ 00:18:52.049 Vendor ID: 0000 00:18:52.049 Subsystem Vendor ID: 0000 00:18:52.049 Serial Number: 
ddbddc90f98ee2d483ff 00:18:52.049 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:52.049 Firmware Version: 6.8.9-20 00:18:52.049 Recommended Arb Burst: 6 00:18:52.049 IEEE OUI Identifier: 00 00 00 00:18:52.049 Multi-path I/O 00:18:52.049 May have multiple subsystem ports: Yes 00:18:52.049 May have multiple controllers: Yes 00:18:52.049 Associated with SR-IOV VF: No 00:18:52.049 Max Data Transfer Size: 1048576 00:18:52.049 Max Number of Namespaces: 1024 00:18:52.049 Max Number of I/O Queues: 128 00:18:52.049 NVMe Specification Version (VS): 1.3 00:18:52.049 NVMe Specification Version (Identify): 1.3 00:18:52.049 Maximum Queue Entries: 128 00:18:52.049 Contiguous Queues Required: No 00:18:52.049 Arbitration Mechanisms Supported 00:18:52.049 Weighted Round Robin: Not Supported 00:18:52.049 Vendor Specific: Not Supported 00:18:52.049 Reset Timeout: 7500 ms 00:18:52.049 Doorbell Stride: 4 bytes 00:18:52.049 NVM Subsystem Reset: Not Supported 00:18:52.049 Command Sets Supported 00:18:52.049 NVM Command Set: Supported 00:18:52.049 Boot Partition: Not Supported 00:18:52.049 Memory Page Size Minimum: 4096 bytes 00:18:52.049 Memory Page Size Maximum: 4096 bytes 00:18:52.049 Persistent Memory Region: Not Supported 00:18:52.049 Optional Asynchronous Events Supported 00:18:52.049 Namespace Attribute Notices: Supported 00:18:52.049 Firmware Activation Notices: Not Supported 00:18:52.049 ANA Change Notices: Supported 00:18:52.049 PLE Aggregate Log Change Notices: Not Supported 00:18:52.049 LBA Status Info Alert Notices: Not Supported 00:18:52.049 EGE Aggregate Log Change Notices: Not Supported 00:18:52.049 Normal NVM Subsystem Shutdown event: Not Supported 00:18:52.049 Zone Descriptor Change Notices: Not Supported 00:18:52.049 Discovery Log Change Notices: Not Supported 00:18:52.049 Controller Attributes 00:18:52.049 128-bit Host Identifier: Supported 00:18:52.049 Non-Operational Permissive Mode: Not Supported 00:18:52.049 NVM Sets: Not Supported 00:18:52.049 Read Recovery Levels: Not Supported 00:18:52.049 Endurance Groups: Not Supported 00:18:52.049 Predictable Latency Mode: Not Supported 00:18:52.049 Traffic Based Keep ALive: Supported 00:18:52.049 Namespace Granularity: Not Supported 00:18:52.049 SQ Associations: Not Supported 00:18:52.049 UUID List: Not Supported 00:18:52.049 Multi-Domain Subsystem: Not Supported 00:18:52.049 Fixed Capacity Management: Not Supported 00:18:52.049 Variable Capacity Management: Not Supported 00:18:52.049 Delete Endurance Group: Not Supported 00:18:52.049 Delete NVM Set: Not Supported 00:18:52.049 Extended LBA Formats Supported: Not Supported 00:18:52.049 Flexible Data Placement Supported: Not Supported 00:18:52.049 00:18:52.049 Controller Memory Buffer Support 00:18:52.049 ================================ 00:18:52.049 Supported: No 00:18:52.049 00:18:52.049 Persistent Memory Region Support 00:18:52.049 ================================ 00:18:52.049 Supported: No 00:18:52.049 00:18:52.049 Admin Command Set Attributes 00:18:52.049 ============================ 00:18:52.049 Security Send/Receive: Not Supported 00:18:52.049 Format NVM: Not Supported 00:18:52.049 Firmware Activate/Download: Not Supported 00:18:52.049 Namespace Management: Not Supported 00:18:52.049 Device Self-Test: Not Supported 00:18:52.049 Directives: Not Supported 00:18:52.049 NVMe-MI: Not Supported 00:18:52.049 Virtualization Management: Not Supported 00:18:52.049 Doorbell Buffer Config: Not Supported 00:18:52.049 Get LBA Status Capability: Not Supported 00:18:52.049 Command & Feature Lockdown 
Capability: Not Supported 00:18:52.049 Abort Command Limit: 4 00:18:52.049 Async Event Request Limit: 4 00:18:52.049 Number of Firmware Slots: N/A 00:18:52.049 Firmware Slot 1 Read-Only: N/A 00:18:52.049 Firmware Activation Without Reset: N/A 00:18:52.049 Multiple Update Detection Support: N/A 00:18:52.049 Firmware Update Granularity: No Information Provided 00:18:52.049 Per-Namespace SMART Log: Yes 00:18:52.049 Asymmetric Namespace Access Log Page: Supported 00:18:52.049 ANA Transition Time : 10 sec 00:18:52.049 00:18:52.049 Asymmetric Namespace Access Capabilities 00:18:52.049 ANA Optimized State : Supported 00:18:52.049 ANA Non-Optimized State : Supported 00:18:52.049 ANA Inaccessible State : Supported 00:18:52.049 ANA Persistent Loss State : Supported 00:18:52.049 ANA Change State : Supported 00:18:52.049 ANAGRPID is not changed : No 00:18:52.049 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:52.049 00:18:52.049 ANA Group Identifier Maximum : 128 00:18:52.049 Number of ANA Group Identifiers : 128 00:18:52.049 Max Number of Allowed Namespaces : 1024 00:18:52.049 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:52.049 Command Effects Log Page: Supported 00:18:52.049 Get Log Page Extended Data: Supported 00:18:52.049 Telemetry Log Pages: Not Supported 00:18:52.049 Persistent Event Log Pages: Not Supported 00:18:52.049 Supported Log Pages Log Page: May Support 00:18:52.049 Commands Supported & Effects Log Page: Not Supported 00:18:52.049 Feature Identifiers & Effects Log Page:May Support 00:18:52.049 NVMe-MI Commands & Effects Log Page: May Support 00:18:52.049 Data Area 4 for Telemetry Log: Not Supported 00:18:52.049 Error Log Page Entries Supported: 128 00:18:52.049 Keep Alive: Supported 00:18:52.049 Keep Alive Granularity: 1000 ms 00:18:52.049 00:18:52.049 NVM Command Set Attributes 00:18:52.049 ========================== 00:18:52.049 Submission Queue Entry Size 00:18:52.049 Max: 64 00:18:52.049 Min: 64 00:18:52.049 Completion Queue Entry Size 00:18:52.049 Max: 16 00:18:52.049 Min: 16 00:18:52.049 Number of Namespaces: 1024 00:18:52.049 Compare Command: Not Supported 00:18:52.049 Write Uncorrectable Command: Not Supported 00:18:52.049 Dataset Management Command: Supported 00:18:52.049 Write Zeroes Command: Supported 00:18:52.049 Set Features Save Field: Not Supported 00:18:52.049 Reservations: Not Supported 00:18:52.049 Timestamp: Not Supported 00:18:52.049 Copy: Not Supported 00:18:52.049 Volatile Write Cache: Present 00:18:52.049 Atomic Write Unit (Normal): 1 00:18:52.049 Atomic Write Unit (PFail): 1 00:18:52.049 Atomic Compare & Write Unit: 1 00:18:52.049 Fused Compare & Write: Not Supported 00:18:52.049 Scatter-Gather List 00:18:52.049 SGL Command Set: Supported 00:18:52.049 SGL Keyed: Supported 00:18:52.049 SGL Bit Bucket Descriptor: Not Supported 00:18:52.049 SGL Metadata Pointer: Not Supported 00:18:52.049 Oversized SGL: Not Supported 00:18:52.050 SGL Metadata Address: Not Supported 00:18:52.050 SGL Offset: Supported 00:18:52.050 Transport SGL Data Block: Not Supported 00:18:52.050 Replay Protected Memory Block: Not Supported 00:18:52.050 00:18:52.050 Firmware Slot Information 00:18:52.050 ========================= 00:18:52.050 Active slot: 0 00:18:52.050 00:18:52.050 Asymmetric Namespace Access 00:18:52.050 =========================== 00:18:52.050 Change Count : 0 00:18:52.050 Number of ANA Group Descriptors : 1 00:18:52.050 ANA Group Descriptor : 0 00:18:52.050 ANA Group ID : 1 00:18:52.050 Number of NSID Values : 1 00:18:52.050 Change Count : 0 00:18:52.050 ANA State 
: 1 00:18:52.050 Namespace Identifier : 1 00:18:52.050 00:18:52.050 Commands Supported and Effects 00:18:52.050 ============================== 00:18:52.050 Admin Commands 00:18:52.050 -------------- 00:18:52.050 Get Log Page (02h): Supported 00:18:52.050 Identify (06h): Supported 00:18:52.050 Abort (08h): Supported 00:18:52.050 Set Features (09h): Supported 00:18:52.050 Get Features (0Ah): Supported 00:18:52.050 Asynchronous Event Request (0Ch): Supported 00:18:52.050 Keep Alive (18h): Supported 00:18:52.050 I/O Commands 00:18:52.050 ------------ 00:18:52.050 Flush (00h): Supported 00:18:52.050 Write (01h): Supported LBA-Change 00:18:52.050 Read (02h): Supported 00:18:52.050 Write Zeroes (08h): Supported LBA-Change 00:18:52.050 Dataset Management (09h): Supported 00:18:52.050 00:18:52.050 Error Log 00:18:52.050 ========= 00:18:52.050 Entry: 0 00:18:52.050 Error Count: 0x3 00:18:52.050 Submission Queue Id: 0x0 00:18:52.050 Command Id: 0x5 00:18:52.050 Phase Bit: 0 00:18:52.050 Status Code: 0x2 00:18:52.050 Status Code Type: 0x0 00:18:52.050 Do Not Retry: 1 00:18:52.050 Error Location: 0x28 00:18:52.050 LBA: 0x0 00:18:52.050 Namespace: 0x0 00:18:52.050 Vendor Log Page: 0x0 00:18:52.050 ----------- 00:18:52.050 Entry: 1 00:18:52.050 Error Count: 0x2 00:18:52.050 Submission Queue Id: 0x0 00:18:52.050 Command Id: 0x5 00:18:52.050 Phase Bit: 0 00:18:52.050 Status Code: 0x2 00:18:52.050 Status Code Type: 0x0 00:18:52.050 Do Not Retry: 1 00:18:52.050 Error Location: 0x28 00:18:52.050 LBA: 0x0 00:18:52.050 Namespace: 0x0 00:18:52.050 Vendor Log Page: 0x0 00:18:52.050 ----------- 00:18:52.050 Entry: 2 00:18:52.050 Error Count: 0x1 00:18:52.050 Submission Queue Id: 0x0 00:18:52.050 Command Id: 0x0 00:18:52.050 Phase Bit: 0 00:18:52.050 Status Code: 0x2 00:18:52.050 Status Code Type: 0x0 00:18:52.050 Do Not Retry: 1 00:18:52.050 Error Location: 0x28 00:18:52.050 LBA: 0x0 00:18:52.050 Namespace: 0x0 00:18:52.050 Vendor Log Page: 0x0 00:18:52.050 00:18:52.050 Number of Queues 00:18:52.050 ================ 00:18:52.050 Number of I/O Submission Queues: 128 00:18:52.050 Number of I/O Completion Queues: 128 00:18:52.050 00:18:52.050 ZNS Specific Controller Data 00:18:52.050 ============================ 00:18:52.050 Zone Append Size Limit: 0 00:18:52.050 00:18:52.050 00:18:52.050 Active Namespaces 00:18:52.050 ================= 00:18:52.050 get_feature(0x05) failed 00:18:52.050 Namespace ID:1 00:18:52.050 Command Set Identifier: NVM (00h) 00:18:52.050 Deallocate: Supported 00:18:52.050 Deallocated/Unwritten Error: Not Supported 00:18:52.050 Deallocated Read Value: Unknown 00:18:52.050 Deallocate in Write Zeroes: Not Supported 00:18:52.050 Deallocated Guard Field: 0xFFFF 00:18:52.050 Flush: Supported 00:18:52.050 Reservation: Not Supported 00:18:52.050 Namespace Sharing Capabilities: Multiple Controllers 00:18:52.050 Size (in LBAs): 7814037168 (3726GiB) 00:18:52.050 Capacity (in LBAs): 7814037168 (3726GiB) 00:18:52.050 Utilization (in LBAs): 7814037168 (3726GiB) 00:18:52.050 UUID: 2707f9e4-ce27-4589-96fe-204b92ab85ca 00:18:52.050 Thin Provisioning: Not Supported 00:18:52.050 Per-NS Atomic Units: Yes 00:18:52.050 Atomic Boundary Size (Normal): 0 00:18:52.050 Atomic Boundary Size (PFail): 0 00:18:52.050 Atomic Boundary Offset: 0 00:18:52.050 NGUID/EUI64 Never Reused: No 00:18:52.050 ANA group ID: 1 00:18:52.050 Namespace Write Protected: No 00:18:52.050 Number of LBA Formats: 1 00:18:52.050 Current LBA Format: LBA Format #00 00:18:52.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:52.050 00:18:52.050 
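
The identify dump above completes the test body; the cleanup that follows (clean_kernel_target, fired by the EXIT trap) tears the configfs tree down in reverse order: unlink the port-to-subsystem binding first, then remove namespace, port, and subsystem directories, and finally unload the nvmet modules. Consolidated from the trace below — the target of the 'echo 0' is not shown by xtrace and is assumed here to be the namespace enable flag:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn

    echo 0 > "$sub/namespaces/1/enable"       # assumed target: disable namespace before removal
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
    rmdir  "$sub/namespaces/1"
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  "$sub"
    modprobe -r nvmet_rdma nvmet              # only once nothing references the modules
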
16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:52.050 rmmod nvme_rdma 00:18:52.050 rmmod nvme_fabrics 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:52.050 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:18:52.310 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:18:54.844 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:54.844 
00:18:52.310 16:31:46 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:18:54.844 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:18:54.844 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:18:58.130 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
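Each `ioatdma -> vfio-pci` line above is setup.sh rebinding one PCI function from its kernel driver to vfio-pci via sysfs. A minimal sketch of one such rebind (the device address comes from the log; driver_override is the usual mechanism, though setup.sh's own implementation may differ in detail):

    # Sketch: move one PCI function from ioatdma to vfio-pci (run as root).
    dev=0000:00:04.7
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"  # pin next probe to vfio-pci
    echo "$dev"   > "/sys/bus/pci/devices/$dev/driver/unbind"    # detach current driver
    echo "$dev"   > /sys/bus/pci/drivers_probe                   # re-probe -> vfio-pci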
00:18:59.506 
00:18:59.506 real 0m19.408s
00:18:59.506 user 0m4.800s
00:18:59.506 sys 0m10.177s
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.506 ************************************
00:18:59.506 END TEST nvmf_identify_kernel_target
00:18:59.506 ************************************
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:59.506 16:31:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:59.765 ************************************
00:18:59.765 START TEST nvmf_auth_host
00:18:59.765 ************************************
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:18:59.766 * Looking for test storage...
00:18:59.766 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:59.766 --rc genhtml_branch_coverage=1
00:18:59.766 --rc genhtml_function_coverage=1
00:18:59.766 --rc genhtml_legend=1
00:18:59.766 --rc geninfo_all_blocks=1
00:18:59.766 --rc geninfo_unexecuted_blocks=1
00:18:59.766 
00:18:59.766 '
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:59.766 --rc genhtml_branch_coverage=1
00:18:59.766 --rc genhtml_function_coverage=1
00:18:59.766 --rc genhtml_legend=1
00:18:59.766 --rc geninfo_all_blocks=1
00:18:59.766 --rc geninfo_unexecuted_blocks=1
00:18:59.766 
00:18:59.766 '
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:59.766 --rc genhtml_branch_coverage=1
00:18:59.766 --rc genhtml_function_coverage=1
00:18:59.766 --rc genhtml_legend=1
00:18:59.766 --rc geninfo_all_blocks=1
00:18:59.766 --rc geninfo_unexecuted_blocks=1
00:18:59.766 
00:18:59.766 '
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:59.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:59.766 --rc genhtml_branch_coverage=1
00:18:59.766 --rc genhtml_function_coverage=1
00:18:59.766 --rc genhtml_legend=1
00:18:59.766 --rc geninfo_all_blocks=1
00:18:59.766 --rc geninfo_unexecuted_blocks=1
00:18:59.766 
00:18:59.766 '
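The cmp_versions trace above splits each version string on '.', '-' and ':' and compares component-wise; `lt 1.15 2` succeeds here (1 < 2 on the first component), so the newer lcov option names are used. Condensed, the logic looks like this sketch, which mirrors the traced scripts/common.sh flow rather than copying it verbatim:

    # Sketch: component-wise "less than" for version strings, as traced above.
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # strictly less on this component
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"   # prints, matching the trace's return 0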
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:59.766 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:59.766 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
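The `[: : integer expression expected` message a few lines up is a real, if benign, shell error: build_nvmf_app_args ends up running `[ '' -eq 1 ]` with an unset/empty variable, and `-eq` requires an integer on both sides. A guarded form of that test would avoid the noise; a small sketch (the variable name here is illustrative, not the one common.sh actually uses):

    # Sketch: guard numeric tests against empty/unset values.
    no_huge=""                           # illustrative stand-in for the empty variable
    if [ "${no_huge:-0}" -eq 1 ]; then   # default to 0 so -eq always sees an integer
        echo "hugepages disabled"
    fi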
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
00:18:59.767 16:31:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:05.032 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:05.032 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
00:19:05.032 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)'
00:19:05.033 Found 0000:18:00.0 (0x15b3 - 0x1015)
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)'
00:19:05.033 Found 0000:18:00.1 (0x15b3 - 0x1015)
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0'
00:19:05.033 Found net devices under 0000:18:00.0: mlx_0_0
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1'
00:19:05.033 Found net devices under 0000:18:00.1: mlx_0_1
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
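The discovery loop above finds the net devices backing each Mellanox PCI function through sysfs (`/sys/bus/pci/devices/$pci/net/*`). Condensed into a standalone sketch (the 0x15b3 vendor ID comes from the trace; device and interface names will vary by host):

    # Sketch: list net interfaces that sit on Mellanox (0x15b3) PCI functions.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x15b3 ]] || continue   # vendor file holds e.g. "0x15b3"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "$(basename "$pci") -> $(basename "$net")"
        done
    done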
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init
00:19:05.033 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:19:05.293 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:05.293 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:19:05.293 altname enp24s0f0np0
00:19:05.293 altname ens785f0np0
00:19:05.293 inet 192.168.100.8/24 scope global mlx_0_0
00:19:05.293 valid_lft forever preferred_lft forever
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:19:05.293 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:05.293 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:19:05.293 altname enp24s0f1np1
00:19:05.293 altname ens785f1np1
00:19:05.293 inet 192.168.100.9/24 scope global mlx_0_1
00:19:05.293 valid_lft forever preferred_lft forever
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0
00:19:05.293 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:19:05.294 192.168.100.9'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:19:05.294 192.168.100.9'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:19:05.294 192.168.100.9'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
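The get_ip_address and RDMA_IP_LIST handling traced above reduces to a short pipeline: take the first IPv4 address of each RDMA-capable interface, then use the first address as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP. An equivalent sketch (interface names and the resulting 192.168.100.8/.9 come from this run):

    # Sketch: first IPv4 address of an interface, then pick the two target IPs.
    get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1 | head -n 1; }

    ips="$(get_ip mlx_0_0)
    $(get_ip mlx_0_1)"
    first=$(head -n 1 <<< "$ips")                 # 192.168.100.8 in this run
    second=$(tail -n +2 <<< "$ips" | head -n 1)   # 192.168.100.9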
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3861790
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3861790
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3861790 ']'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:05.294 16:31:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=39c90efdf5cdd6e50156a53b22f7bde8
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AIj
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 39c90efdf5cdd6e50156a53b22f7bde8 0
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 39c90efdf5cdd6e50156a53b22f7bde8 0
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=39c90efdf5cdd6e50156a53b22f7bde8
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:19:05.558 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AIj
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AIj
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.AIj
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3b602388a809c6d073f1bddbb98a597de156816e8a80cdf3ef977c5dd0762ae0
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gEX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3b602388a809c6d073f1bddbb98a597de156816e8a80cdf3ef977c5dd0762ae0 3
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3b602388a809c6d073f1bddbb98a597de156816e8a80cdf3ef977c5dd0762ae0 3
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3b602388a809c6d073f1bddbb98a597de156816e8a80cdf3ef977c5dd0762ae0
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gEX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gEX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gEX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb1c2dfa99a53ba8a1bca320b73b0ca923aacb827fa896b1
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DdI
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb1c2dfa99a53ba8a1bca320b73b0ca923aacb827fa896b1 0
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb1c2dfa99a53ba8a1bca320b73b0ca923aacb827fa896b1 0
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb1c2dfa99a53ba8a1bca320b73b0ca923aacb827fa896b1
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DdI
00:19:05.817 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DdI
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DdI
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=23f3f36af31ddd3f8717dec6cefffde8bdaaf1c018a58f67
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hnK
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 23f3f36af31ddd3f8717dec6cefffde8bdaaf1c018a58f67 2
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 23f3f36af31ddd3f8717dec6cefffde8bdaaf1c018a58f67 2
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=23f3f36af31ddd3f8717dec6cefffde8bdaaf1c018a58f67
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hnK
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hnK
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hnK
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=93ee09691ac634d40759049e5dddb905
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9E4
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 93ee09691ac634d40759049e5dddb905 1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 93ee09691ac634d40759049e5dddb905 1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=93ee09691ac634d40759049e5dddb905
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9E4
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9E4
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.9E4
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2d0abee609b0de6a9557926dae8b9082
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IIG
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2d0abee609b0de6a9557926dae8b9082 1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2d0abee609b0de6a9557926dae8b9082 1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2d0abee609b0de6a9557926dae8b9082
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:19:05.818 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IIG
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IIG
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.IIG
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=367a762e6fcf51e96e3f5bc327597063f662edb56c3cccff
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cqq
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 367a762e6fcf51e96e3f5bc327597063f662edb56c3cccff 2
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 367a762e6fcf51e96e3f5bc327597063f662edb56c3cccff 2
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=367a762e6fcf51e96e3f5bc327597063f662edb56c3cccff
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cqq
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cqq
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.cqq
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=98b1e240d628f3ec8273a766e5986aa1
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tJi
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 98b1e240d628f3ec8273a766e5986aa1 0
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 98b1e240d628f3ec8273a766e5986aa1 0
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=98b1e240d628f3ec8273a766e5986aa1
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tJi
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tJi
00:19:06.076 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.tJi
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=044eed88f32cb84bdd7e199aad4eebf21e6f1e15ab9c761e31fc4f3f5b6fe940
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.clT
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 044eed88f32cb84bdd7e199aad4eebf21e6f1e15ab9c761e31fc4f3f5b6fe940 3
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 044eed88f32cb84bdd7e199aad4eebf21e6f1e15ab9c761e31fc4f3f5b6fe940 3
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=044eed88f32cb84bdd7e199aad4eebf21e6f1e15ab9c761e31fc4f3f5b6fe940
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.clT
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.clT
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.clT
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
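Each gen_dhchap_key call above draws len/2 random bytes with xxd and wraps them in the NVMe DH-HMAC-CHAP secret representation via a small inline python snippet. A sketch of the same idea follows; the DHHC-1 wrapping shown (base64 of the key bytes plus a little-endian CRC-32, with the digest id 0-3 in the header) matches the NVMe spec's secret format as far as we can tell, but treat those details as an assumption rather than a quote of common.sh:

    # Sketch: produce a DHHC-1 secret the way gen_dhchap_key appears to.
    # digest id: 0=none(null), 1=sha256, 2=sha384, 3=sha512
    len=32; digest=0
    hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    key=$(python3 - "$hexkey" "$digest" <<'EOF'
    import base64, binascii, sys, zlib
    raw = binascii.unhexlify(sys.argv[1])
    crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed: spec appends CRC-32 of the key
    print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(raw + crc).decode()}:")
    EOF
    )
    file=$(mktemp -t spdk.key-null.XXX)
    echo "$key" > "$file" && chmod 0600 "$file"   # keep the secret file private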
00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.077 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AIj 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.333 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gEX ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gEX 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DdI 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hnK ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hnK 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.9E4 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.IIG ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IIG 00:19:06.334 16:32:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.cqq 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.tJi ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.tJi 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.clT 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.334 16:32:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:19:06.334 16:32:01 
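With all five key files in place, host/auth.sh@80-82 registers each one with the SPDK application over /var/tmp/spdk.sock before any connection is attempted; keyN carries the host secret and ckeyN, when present, the controller secret for bidirectional DH-HMAC-CHAP. Condensed, the loop traced above amounts to the following (rpc.py standing in for the rpc_cmd wrapper the suite uses):

for i in "${!keys[@]}"; do
    rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
    # ckeys[4] is empty, so the guard skips the controller key there
    [[ -n "${ckeys[$i]}" ]] && rpc.py keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done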
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:06.334 16:32:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:09.609 Waiting for block devices as requested 00:19:09.609 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:09.609 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:09.867 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:09.867 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:09.867 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:10.124 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:10.124 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:10.124 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:10.383 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:10.383 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:12.280 No valid GPT data, bailing 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:12.280 16:32:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:12.280 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:12.540 00:19:12.540 Discovery Log Number of Records 2, Generation counter 2 00:19:12.540 =====Discovery Log Entry 0====== 00:19:12.540 trtype: rdma 00:19:12.540 adrfam: ipv4 00:19:12.540 subtype: current discovery subsystem 00:19:12.540 treq: not specified, sq flow control disable supported 00:19:12.540 portid: 1 00:19:12.540 trsvcid: 4420 00:19:12.540 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:12.540 traddr: 192.168.100.8 00:19:12.540 eflags: none 00:19:12.540 rdma_prtype: not specified 00:19:12.540 rdma_qptype: connected 00:19:12.540 rdma_cms: rdma-cm 00:19:12.540 rdma_pkey: 0x0000 00:19:12.540 =====Discovery Log Entry 1====== 00:19:12.540 trtype: rdma 00:19:12.540 adrfam: ipv4 00:19:12.540 subtype: nvme subsystem 00:19:12.540 treq: not specified, sq flow control disable supported 00:19:12.540 portid: 1 00:19:12.540 trsvcid: 4420 00:19:12.540 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:12.540 traddr: 192.168.100.8 00:19:12.540 eflags: none 00:19:12.540 rdma_prtype: not specified 00:19:12.540 rdma_qptype: connected 00:19:12.540 rdma_cms: rdma-cm 00:19:12.540 rdma_pkey: 0x0000 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
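configure_kernel_target then assembles a kernel nvmet target purely through configfs: subsystem nqn.2024-02.io.spdk:cnode0 with namespace 1 backed by the just-reset /dev/nvme0n1, plus RDMA port 1 on 192.168.100.8:4420, linked together so the discovery output above lists both entries. The bare echo/mkdir/ln calls in the trace map onto the stock nvmet attribute files roughly as follows (xtrace does not print redirections, so the attribute names are inferred from the standard configfs layout rather than shown verbatim):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
echo 1             > "$sub/attr_allow_any_host"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
echo 1             > "$sub/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"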
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:12.540 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
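nvmet_auth_init locks the subsystem down to the single allowed host (the echo 0 at host/auth.sh@37 presumably clears attr_allow_any_host before the allowed_hosts symlink is made), and nvmet_auth_set_key then installs the negotiation parameters for that host. The echo destinations are again hidden by xtrace; assuming the kernel's standard nvmet host attributes, the first round (sha256/ffdhe2048, keyid 1) amounts to:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"      # inferred attribute names
echo ffdhe2048        > "$host/dhchap_dhgroup"
echo "DHHC-1:00:...:" > "$host/dhchap_key"       # host secret; full value is keys[1] above
echo "DHHC-1:02:...:" > "$host/dhchap_ctrl_key"  # controller secret (ckeys[1])

The [[ -z ... ]] guard at host/auth.sh@51 is what makes the controller key write optional.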
host/auth.sh@61 -- # get_main_ns_ip 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.541 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.800 nvme0n1 00:19:12.800 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.801 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 nvme0n1 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.060 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.061 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.320 nvme0n1 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
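Each connect_authenticate round has the same four beats: restrict the initiator to one digest/dhgroup pair, attach with the keyring names registered earlier, confirm a controller came up, and tear it down again. get_main_ns_ip resolves the address per transport; for rdma it picks NVMF_FIRST_TARGET_IP, i.e. 192.168.100.8 throughout this run. Stripped of the xtrace noise, one round is:

rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0

If the DH-HMAC-CHAP exchange fails, the attach RPC should error out and the name check never sees nvme0.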
xtrace_disable 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.320 16:32:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.320 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.321 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.579 nvme0n1 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.579 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.580 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.838 nvme0n1 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.838 16:32:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
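keys[4] is the one secret without a controller counterpart (ckeys[4] is empty), so this round exercises unidirectional authentication: the host proves itself to the target but does not challenge the controller back. The ckey expansion at host/auth.sh@58, visible in the trace, is the idiom that makes the flag optional:

# Expands to a zero-element array when ckeys[keyid] is empty, so
# "${ckey[@]}" contributes nothing to the attach command line
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"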
00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.838 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:13.839 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:13.839 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.839 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.839 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.098 nvme0n1 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:14.098 
16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.098 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 nvme0n1 00:19:14.379 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.379 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.379 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.379 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.379 16:32:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.379 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.638 nvme0n1 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.638 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.639 16:32:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.639 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.898 nvme0n1 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.898 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 nvme0n1 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.158 16:32:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.158 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.416 16:32:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.416 nvme0n1 00:19:15.416 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.416 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.416 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.417 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.674 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.674 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.674 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.674 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:15.674 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.675 
16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.675 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.933 nvme0n1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.933 
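
On the host side, every iteration above is the same four RPCs, all visible verbatim in the trace; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py. Condensed into a standalone form below (the rpc.py path is the assumed default). Note the verification step: xtrace renders it as [[ nvme0 == \n\v\m\e\0 ]] because set -x backslash-escapes the quoted right-hand side to show it is matched literally, not as a glob.

    rpc=./scripts/rpc.py    # assumed default location of the SPDK RPC client

    # restrict the initiator to the digest/dhgroup under test (host/auth.sh@60)
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # connect with the host key, plus the controller key when the key ID has one (host/auth.sh@61)
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the controller authenticated and came up, then tear it down (host/auth.sh@64-65)
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    $rpc bdev_nvme_detach_controller nvme0
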
16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.933 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.191 nvme0n1 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:16.191 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:16.192 
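
The get_main_ns_ip helper traced between the two RPCs (nvmf/common.sh@769-783) chooses which address to dial purely from the transport: RDMA runs use NVMF_FIRST_TARGET_IP (here 192.168.100.8), TCP runs would use NVMF_INITIATOR_IP. Its core logic, reconstructed from the xtrace, is roughly the following; the exact function body and the TEST_TRANSPORT variable name are inferred rather than shown in the trace.

    # Reconstruction of get_main_ns_ip from the xtrace above (nvmf/common.sh@769-783).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # RDMA: dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP        # TCP: dial the initiator IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # [[ -z rdma ]] in the trace
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_FIRST_TARGET_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1    # indirect expansion; the trace shows [[ -z 192.168.100.8 ]]
        echo "${!ip}"                  # -> 192.168.100.8
    }
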
16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.192 16:32:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.449 nvme0n1 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.449 16:32:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.449 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.706 nvme0n1 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.706 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.982 
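
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) assignment repeated at host/auth.sh@58 is why key ID 4 connects without a --dhchap-ctrlr-key flag while IDs 0-3 connect with one: ${var:+word} expands to word only when var is set and non-empty, so the empty ckeys[4] leaves the array empty and the flags simply vanish from the later rpc_cmd. A minimal demonstration with hypothetical secrets:

    # ${var:+word}: expands to word only if var is set and non-empty.
    # Storing the expansion in an array lets optional CLI flags disappear cleanly.
    ckeys=("c0" "c1" "c2" "c3" "")   # hypothetical; in this trace, ckeys[4] is empty

    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller-key args>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # ...
    # keyid=4 -> <no controller-key args>
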
16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.982 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.241 nvme0n1 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.241 16:32:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.500 nvme0n1 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
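
All of the secrets cycling through this trace use the NVMe in-band authentication representation DHHC-1:<t>:<base64>:, where, per the DH-HMAC-CHAP spec, <t> records how the secret was transformed (00 = untransformed; 01, 02, 03 = SHA-256, SHA-384, SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32, so the key IDs 0-4 above exercise every transform value. A small illustrative check (the field split and length arithmetic follow that format; the key is keyid 0's from this trace):

    # Illustrative parse of a DHHC-1 secret's shape.
    key='DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO:'

    IFS=: read -r magic transform b64 _ <<< "$key"
    payload=$(printf '%s' "$b64" | base64 -d | wc -c)

    echo "format=$magic transform=$transform"    # DHHC-1 / 00 -> unhashed secret
    echo "secret bytes=$((payload - 4))"         # payload = secret + CRC-32 -> 32 here
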
00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.500 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:17.758 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.759 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.017 nvme0n1 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
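
Stepping back, the @101/@102 frames visible throughout are the two loops driving this whole section: one digest (sha256 here) crossed with each DH group and each key ID, giving the ffdhe3072 and ffdhe4096 passes completed above and the ffdhe6144 pass now in progress. The skeleton, reconstructed from the trace with the secrets reduced to hypothetical placeholders so it stands alone:

    # Loop skeleton from host/auth.sh@101-104 as traced; echo stands in for the real helpers.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this sha256 pass
    keys=(k0 k1 k2 k3 k4)                      # really the DHHC-1 host secrets shown above
    ckeys=(c0 c1 c2 c3 "")                     # controller secrets; entry 4 intentionally empty

    for dhgroup in "${dhgroups[@]}"; do                           # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                            # host/auth.sh@102
            echo "nvmet_auth_set_key   sha256 $dhgroup $keyid"    # host/auth.sh@103
            echo "connect_authenticate sha256 $dhgroup $keyid"    # host/auth.sh@104
        done
    done
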
00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:18.017 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:18.018 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:18.018 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:18.018 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.018 16:32:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.018 16:32:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 nvme0n1 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:18.583 16:32:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.583 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.841 nvme0n1 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.841 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:19.104 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:19.104 16:32:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:19.105 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.105 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.105 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.362 nvme0n1 00:19:19.362 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.362 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.362 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.362 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.362 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 
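The bare echo lines just above ('hmac(sha256)', ffdhe8192, and the two DHHC-1 secrets for key slot 0) have no visible destination because xtrace does not print redirections. Assuming they land in the usual kernel nvmet configfs attributes for the host NQN (an assumption; the trace itself never shows the paths), nvmet_auth_set_key (host/auth.sh@42-51) plausibly reduces to:

  # Program the kernel nvmet target side of DH-HMAC-CHAP for one key slot.
  # The configfs paths below are assumed, not shown in the trace.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      # Hypothetical host entry; the NQN matches the -q value in the attach calls.
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha256)
      echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe8192
      echo "$key" > "$host/dhchap_key"                # DHHC-1:xx:...: secret
      # Slot 4 has no controller key in this run, hence the [[ -z '' ]] checks.
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }

In the DHHC-1:NN:<base64>: secret representation, the NN field describes the secret (00 for a cleartext secret; 01/02/03 for secrets transformed with SHA-256/384/512 per the NVMe DH-HMAC-CHAP spec), which is why the key strings above come in several lengths.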
00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 16:32:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.363 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.926 nvme0n1 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.926 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.927 16:32:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.525 nvme0n1 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:20.525 16:32:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.525 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.146 nvme0n1 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.146 
16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:21.146 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.147 16:32:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.710 nvme0n1 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.710 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.967 16:32:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.967 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.530 nvme0n1 00:19:22.531 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.531 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.531 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.531 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.531 16:32:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
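The host/auth.sh@100-@103 entries just above open the next pass of the test's outer iteration: every digest x dhgroup x key-slot combination gets its own set-key plus handshake round. Within this section the run finishes sha256 over ffdhe6144 and ffdhe8192 and begins sha384 over ffdhe2048, whose first round continues below. The driving structure, with array contents inferred only from the combinations visible in this log (the script may define more or fewer):

  # Outer loops (host/auth.sh@100-103); digests/dhgroups are inferred, not quoted.
  digests=(sha256 sha384 sha512)          # sha512 assumed, not reached in this excerpt
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do  # key slots 0-4; slot 4 has no ctrlr key
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
          done
      done
  done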
00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.531 nvme0n1 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.531 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.789 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.047 nvme0n1 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:23.047 16:32:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.047 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.305 nvme0n1 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.305 16:32:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.305 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.306 16:32:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.564 nvme0n1 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=4 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.564 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:19:23.822 nvme0n1 00:19:23.822 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.822 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:23.823 
16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.823 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 nvme0n1 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- 
# for keyid in "${!keys[@]}" 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.081 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.082 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.340 nvme0n1 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- 
# echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.340 16:32:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.598 nvme0n1 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.598 16:32:19 
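The matching initiator-side sequence, connect_authenticate (host/auth.sh@55-65), reduces to four SPDK JSON-RPC calls, shown here as a standalone sketch with the exact flags from the trace above. It assumes a running SPDK application reachable through scripts/rpc.py, and that key2/ckey2 were registered in SPDK's keyring earlier in the test:

    rpc=./scripts/rpc.py
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    $rpc bdev_nvme_detach_controller nvme0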
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.598 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.856 nvme0n1 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:24.856 16:32:19 
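On the secret format itself: each string is "DHHC-1:<t>:<base64>:", where <t> selects the optional secret transform (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload decodes to the raw secret followed by a 4-byte CRC-32. The length check below uses a key from this run and is pure arithmetic; the nvme-cli gen-dhchap-key subcommand and its flag spellings are an assumption for your nvme-cli version:

    key='DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1:'
    echo "$key" | cut -d: -f3 | base64 -d | wc -c   # 36 bytes = 32-byte secret + CRC-32
    # Generating a fresh secret in the same format:
    nvme gen-dhchap-key --key-length 32 --hmac 1    # emits DHHC-1:01:...: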
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:24.856 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.857 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.115 nvme0n1 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.116 16:32:19 
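Note how the keyid=4 iterations differ from the rest: ckey is empty, so authentication is unidirectional and the attach call carries no --dhchap-ctrlr-key. That is the job of the array expansion at host/auth.sh@58; a standalone demo of the ${var:+...} idiom, with a placeholder secret for illustration:

    ckeys=([0]='DHHC-1:00:placeholder:' [4]='')
    for keyid in 0 4; do
        # expands to two words when ckeys[keyid] is non-empty, to nothing otherwise
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra argv: ${ckey[*]:-(none)}"
    done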
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.116 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.374 nvme0n1 00:19:25.374 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.374 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.374 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.374 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.374 16:32:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.374 16:32:20 
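The get_main_ns_ip helper traced at nvmf/common.sh@769-783 picks the address to dial by transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, then dereferences the chosen variable name. A condensed, runnable sketch of that logic (the real helper's fallback branches are omitted):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable *name*, e.g. NVMF_FIRST_TARGET_IP
        echo "${!ip}"                          # indirect expansion yields the address itself
    }
    TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8
    get_main_ns_ip                             # prints 192.168.100.8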
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:25.374 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:25.375 16:32:20 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.375 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.632 nvme0n1 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.632 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.890 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.148 nvme0n1 00:19:26.148 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.148 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.148 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.149 16:32:20 
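A reading note on the verification step that recurs throughout this run: inside [[ ]] the right-hand side of == is a glob pattern, so xtrace prints it with every character backslash-escaped ([[ nvme0 == \n\v\m\e\0 ]]) to record that the controller name was matched literally. The escaped and unescaped forms are the same test:

    name=nvme0   # stand-in for: rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
    [[ $name == \n\v\m\e\0 ]] && echo "literal match"   # identical to [[ $name == nvme0 ]]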
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.149 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 nvme0n1 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 16:32:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.407 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.665 nvme0n1 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.666 16:32:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.666 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.231 nvme0n1 00:19:27.231 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.231 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.231 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.231 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
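Annotation (not part of the captured trace): the nvmet_auth_set_key expansions above configure the target half of each DH-HMAC-CHAP round by echoing a hash name, a DH group and one of the DHHC-1 secrets into the kernel nvmet target before the initiator reconnects. A minimal sketch of that helper follows, under stated assumptions: the configfs host path and the attribute names dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are assumed and not shown verbatim in this log, while the echoed values are copied from the trace.

# Minimal sketch, assuming the standard Linux nvmet configfs layout.
# keys[]/ckeys[] mirror the DHHC-1 secrets visible in the trace (key id 3 shown).
declare -a keys ckeys
keys[3]=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==:
ckeys[3]=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac($digest)" > "$hostdir/dhchap_hash"            # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$hostdir/dhchap_dhgroup"              # e.g. ffdhe6144
    echo "${keys[$keyid]}" > "$hostdir/dhchap_key"           # host secret
    if [[ -n ${ckeys[$keyid]} ]]; then                       # empty for key id 4 above
        echo "${ckeys[$keyid]}" > "$hostdir/dhchap_ctrl_key" # enables bidirectional auth
    fi
}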
00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.232 16:32:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.490 nvme0n1 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.490 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:27.748 16:32:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.748 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.749 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.008 nvme0n1 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
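Annotation (not part of the captured trace): after each target-side key update, the loop replays the same initiator-side round through four SPDK RPCs, all visible in the trace. They are condensed below as plain rpc.py invocations with the flags copied verbatim from the log; the ./scripts/rpc.py path and the assumption that key2/ckey2 are already registered in SPDK's keyring belong to the surrounding harness, not to this excerpt.

# One connect/verify/teardown round (here: sha384 + ffdhe6144, key id 2),
# repeated above for every digest, DH group and key id combination.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0

The 192.168.100.8 address comes from the get_main_ns_ip expansion that precedes each attach: for the rdma transport it dereferences NVMF_FIRST_TARGET_IP, as the [[ -z 192.168.100.8 ]] / echo 192.168.100.8 lines show.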
00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.008 16:32:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.575 nvme0n1 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.575 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.576 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 nvme0n1 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.834 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.091 16:32:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.656 nvme0n1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.656 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.230 nvme0n1 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.230 16:32:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.808 nvme0n1 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:30.808 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:30.809 
16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.809 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.372 nvme0n1 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.373 16:32:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.373 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.937 nvme0n1 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:31.937 16:32:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.937 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.193 nvme0n1 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:32.193 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.193 16:32:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.194 16:32:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.450 nvme0n1 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.450 16:32:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
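
At this point in the trace the host has just restricted bdev_nvme to sha512/ffdhe2048, and the entries that follow resolve the RDMA target address (192.168.100.8) and attach with key pair 2. Condensed into one place, a single host-side connect_authenticate pass looks like the sketch below; rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py, and key2/ckey2 are key names the test registered with SPDK earlier in the run (that setup is outside this excerpt).

  # One host-side authentication pass, condensed from the host/auth.sh@60-@65
  # entries traced above and below. Constrain the allowed digest/dhgroup first:
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Attach to the target subsystem, offering key2 and additionally requiring
  # the controller to prove possession of ckey2 (bidirectional auth):
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # The attach only yields a controller if DH-HMAC-CHAP succeeded:
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The nvme0n1 lines interleaved in the trace are the namespace of that freshly attached controller appearing between the attach and the verification step.
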
00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.450 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.707 nvme0n1 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.707 16:32:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:32.707 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.708 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.965 nvme0n1 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:32.965 16:32:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.965 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.222 nvme0n1 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
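
The for dhgroup entry just above marks the sweep moving on from ffdhe2048 to the next DH group under sha512. This whole portion of the log is one three-level loop over digests, DH groups, and key indices (host/auth.sh@100-@104): a target-side provisioning step, then the host-side connect shown earlier. A sketch of that control flow, with illustrative array contents (the real lists and the DHHC-1 strings held in keys/ckeys come from the test's setup phase, which is not part of this excerpt):

  # Shape of the sweep driving this section of the trace:
  digests=(sha256 sha384 sha512)                                # illustrative
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # illustrative
  declare -a keys ckeys   # DHHC-1 secrets; indices 0..4 in this trace
  for digest in "${digests[@]}"; do                             # @100
      for dhgroup in "${dhgroups[@]}"; do                       # @101
          for keyid in "${!keys[@]}"; do                        # @102
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
          done
      done
  done

Every combination has to authenticate successfully; under the test's strict error handling a single failed attach would fail the run.
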
00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.222 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.223 16:32:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.479 nvme0n1 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:33.479 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.480 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.736 nvme0n1 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.736 16:32:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:33.736 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.737 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.993 16:32:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.993 nvme0n1 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.993 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.250 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.250 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.250 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 
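
The key= value directly above (and the ckey= that follows it) use the DHHC-1 text representation of a DH-HMAC-CHAP secret: the literal prefix DHHC-1, a two-digit indicator (in this trace 00 appears with untransformed secrets of varying length, while 01, 02 and 03 carry 32-, 48- and 64-byte secrets), and a base64 payload consisting of the secret with a 4-byte CRC-32 appended, all colon-delimited. A minimal sketch that unpacks the key above; the exact CRC variant is not verified here, so the integrity check is omitted:

  # Split "DHHC-1:<hh>:<base64(secret || crc32)>:" into its fields:
  key='DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==:'
  IFS=: read -r tag hh b64 _ <<< "$key"
  [[ $tag == DHHC-1 ]] || { echo "not a DHHC-1 secret" >&2; exit 1; }
  payload_len=$(printf '%s' "$b64" | base64 -d | wc -c)
  # The last 4 bytes of the payload are the CRC; the rest is the secret:
  echo "indicator: $hh, secret bytes: $((payload_len - 4))"
  # -> indicator: 02, secret bytes: 48 (the SHA-384 digest size)
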
00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.251 16:32:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.251 nvme0n1 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.251 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.508 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.508 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.508 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.508 16:32:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.508 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.509 nvme0n1 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.509 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.767 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.025 nvme0n1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.025 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.283 nvme0n1 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.283 16:32:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.283 16:32:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.283 16:32:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.542 nvme0n1 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:35.542 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.543 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.800 nvme0n1 00:19:35.800 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.800 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.800 
16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.800 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.800 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.800 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:19:36.058 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.059 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.316 nvme0n1 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.316 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:36.317 16:32:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 16:32:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.574 nvme0n1 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.574 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.832 16:32:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.832 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.091 nvme0n1 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
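[editor's note] The frames above are one full pass of connect_authenticate. A minimal sketch of the host-side sequence, reconstructed only from the rpc_cmd calls visible in this trace (argument plumbing simplified; the real helper lives in host/auth.sh):

    # Sketch of the flow traced at host/auth.sh@55-65 above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        # Restrict the host to the digest/dhgroup pair under test
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach over RDMA, offering key$keyid (plus ckey$keyid when a
        # controller key is defined for this keyid)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

        # Authentication succeeded iff the controller actually appeared
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
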
00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
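[editor's note] The get_main_ns_ip helper that resolves 192.168.100.8 before every attach is traced in full at nvmf/common.sh@769-783. A sketch of what it evaluates; variable names are taken from the trace, while the $TEST_TRANSPORT lookup is an assumption inferred from the rdma/tcp candidate table:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Select the variable *name* for the active transport (rdma here),
        # then dereference it: 192.168.100.8 is $NVMF_FIRST_TARGET_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
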
00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:37.091 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:37.092 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.092 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.092 16:32:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.658 nvme0n1 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.658 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.916 nvme0n1 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.916 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.174 16:32:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.432 nvme0n1 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.432 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 
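[editor's note] Every secret in this run uses the DHHC-1:<t>:<base64>: representation from the NVMe DH-HMAC-CHAP spec, where the second field encodes the secret transformation (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A hedged example of minting a compatible secret; the nvme-cli invocation below is an assumption from upstream nvme-cli usage, not something this log shows:

    # Shape of the secrets exchanged above (spec-defined, not log-specific):
    #   DHHC-1:<transform>:<base64(key || crc32)>:
    # One possible way to generate one with nvme-cli (flags assumed):
    nvme gen-dhchap-key --hmac=3 --nqn nqn.2024-02.io.spdk:host0
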
00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzljOTBlZmRmNWNkZDZlNTAxNTZhNTNiMjJmN2JkZTj2qgjO: 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2I2MDIzODhhODA5YzZkMDczZjFiZGRiYjk4YTU5N2RlMTU2ODE2ZThhODBjZGYzZWY5NzdjNWRkMDc2MmFlMKjzAo0=: 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.433 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.998 nvme0n1 
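[editor's note] The host/auth.sh@101-@104 frames repeating through this section are the outer test loop: each dhgroup is exercised against every configured keyid, first provisioning the target key, then authenticating. A minimal reconstruction (array contents elided; only sha512 and the ffdhe4096/6144/8192 groups appear in this slice):

    # Outer loop driving the @101-@104 frames in the trace
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104
        done
    done
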
00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.999 16:32:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.999 16:32:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.564 nvme0n1 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.564 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
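(The nvmet_auth_set_key echoes at host/auth.sh@48-51 configure the kernel-target side of the same handshake. A sketch of what those redirections presumably write, assuming the Linux nvmet configfs auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key) under the host entry seen in the cleanup later in this log; the DHHC-1 strings here are placeholders, not the generated test keys:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"       # digest under test
  echo 'ffdhe8192'    > "$host/dhchap_dhgroup"    # DH group under test
  echo 'DHHC-1:01:<placeholder-key>:'  > "$host/dhchap_key"        # host key
  echo 'DHHC-1:01:<placeholder-ckey>:' > "$host/dhchap_ctrlr_key"  # controller key, optional
)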
00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.823 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.388 nvme0n1 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.388 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY3YTc2MmU2ZmNmNTFlOTZlM2Y1YmMzMjc1OTcwNjNmNjYyZWRiNTZjM2NjY2ZmQXJymg==: 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OThiMWUyNDBkNjI4ZjNlYzgyNzNhNzY2ZTU5ODZhYTHTMa1K: 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:40.389 16:32:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.389 16:32:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.954 nvme0n1 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
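(Each keyid above goes through the same loop, and the array expansion at host/auth.sh@58 is what makes the controller key optional: when ckeys[keyid] is empty, as for keyid 4 below, the expansion produces no argument at all and --dhchap-ctrlr-key is simply omitted. A sketch of the idiom, assuming keys/ckeys arrays populated earlier in the run:

  for keyid in "${!keys[@]}"; do
      # Expands to nothing when ckeys[keyid] is empty or unset.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      rpc.py bdev_nvme_detach_controller nvme0
  done
)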
00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ0ZWVkODhmMzJjYjg0YmRkN2UxOTlhYWQ0ZWViZjIxZTZmMWUxNWFiOWM3NjFlMzFmYzRmM2Y1YjZmZTk0MPUAUEA=: 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.954 16:32:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.518 nvme0n1 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:41.518 
16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.518 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.519 request: 00:19:41.519 { 00:19:41.519 "name": "nvme0", 00:19:41.519 "trtype": "rdma", 00:19:41.519 "traddr": "192.168.100.8", 00:19:41.519 "adrfam": "ipv4", 00:19:41.519 "trsvcid": "4420", 00:19:41.519 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:19:41.519 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:41.519 "prchk_reftag": false, 00:19:41.519 "prchk_guard": false, 00:19:41.519 "hdgst": false, 00:19:41.519 "ddgst": false, 00:19:41.519 "allow_unrecognized_csi": false, 00:19:41.519 "method": "bdev_nvme_attach_controller", 00:19:41.519 "req_id": 1 00:19:41.519 } 00:19:41.519 Got JSON-RPC error response 00:19:41.519 response: 00:19:41.519 { 00:19:41.519 "code": -5, 00:19:41.519 "message": "Input/output error" 00:19:41.519 } 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.519 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.776 request: 00:19:41.776 { 00:19:41.777 "name": "nvme0", 00:19:41.777 "trtype": "rdma", 00:19:41.777 "traddr": "192.168.100.8", 00:19:41.777 "adrfam": "ipv4", 00:19:41.777 "trsvcid": "4420", 00:19:41.777 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:41.777 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:41.777 "prchk_reftag": false, 00:19:41.777 "prchk_guard": false, 00:19:41.777 "hdgst": false, 00:19:41.777 "ddgst": false, 00:19:41.777 "dhchap_key": "key2", 00:19:41.777 "allow_unrecognized_csi": false, 00:19:41.777 "method": "bdev_nvme_attach_controller", 00:19:41.777 "req_id": 1 00:19:41.777 } 00:19:41.777 Got JSON-RPC error response 00:19:41.777 response: 00:19:41.777 { 00:19:41.777 "code": -5, 00:19:41.777 "message": "Input/output error" 00:19:41.777 } 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.777 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.035 request: 00:19:42.035 { 00:19:42.035 "name": "nvme0", 00:19:42.035 "trtype": "rdma", 00:19:42.035 "traddr": "192.168.100.8", 00:19:42.035 "adrfam": "ipv4", 00:19:42.035 "trsvcid": "4420", 00:19:42.035 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:42.035 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:42.035 "prchk_reftag": false, 00:19:42.035 "prchk_guard": false, 00:19:42.035 "hdgst": false, 00:19:42.035 "ddgst": false, 00:19:42.035 "dhchap_key": "key1", 00:19:42.035 "dhchap_ctrlr_key": "ckey2", 00:19:42.035 "allow_unrecognized_csi": false, 00:19:42.035 "method": "bdev_nvme_attach_controller", 00:19:42.035 "req_id": 1 00:19:42.035 } 00:19:42.035 Got JSON-RPC error response 00:19:42.035 response: 00:19:42.035 { 00:19:42.035 "code": -5, 00:19:42.035 "message": "Input/output error" 00:19:42.035 } 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:42.035 16:32:36 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.035 nvme0n1 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.035 
16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.035 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.292 request: 00:19:42.292 { 00:19:42.292 "name": "nvme0", 00:19:42.292 "dhchap_key": "key1", 00:19:42.292 "dhchap_ctrlr_key": "ckey2", 00:19:42.292 "method": "bdev_nvme_set_keys", 00:19:42.292 "req_id": 1 00:19:42.292 } 00:19:42.292 Got JSON-RPC error response 00:19:42.292 response: 00:19:42.292 { 00:19:42.292 "code": -13, 00:19:42.292 "message": "Permission denied" 00:19:42.292 } 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:42.292 16:32:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:43.222 16:32:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:44.151 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:44.151 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.151 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.151 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.151 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.408 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:44.408 16:32:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIxYzJkZmE5OWE1M2JhOGExYmNhMzIwYjczYjBjYTkyM2FhY2I4MjdmYTg5NmIx5xkPhw==: 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: ]] 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjNmM2YzNmFmMzFkZGQzZjg3MTdkZWM2Y2VmZmZkZThiZGFhZjFjMDE4YTU4ZjY3BcrKeA==: 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.361 16:32:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.619 nvme0n1 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTNlZTA5NjkxYWM2MzRkNDA3NTkwNDllNWRkZGI5MDUhtrD1: 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: ]] 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmQwYWJlZTYwOWIwZGU2YTk1NTc5MjZkYWU4YjkwODKbE7Gc: 
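(With a controller attached, and with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 keeping reconnects fast, the test re-keys it in place and then checks that a mismatched pair is rejected with -13 "Permission denied", just as the earlier no-key and wrong-key attach attempts failed with -5 "Input/output error". A sketch of the two set_keys calls, under the same rpc.py assumption as above:

  # Matching pair: accepted, the controller re-authenticates on reconnect.
  rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Mismatched pair: must be rejected with "Permission denied" (code -13).
  rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 \
      && echo 'unexpected success' >&2
  # The sleep-1s loops that follow poll bdev_nvme_get_controllers | jq length
  # once per second until the controller count settles.
)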
00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.619 request: 00:19:45.619 { 00:19:45.619 "name": "nvme0", 00:19:45.619 "dhchap_key": "key2", 00:19:45.619 "dhchap_ctrlr_key": "ckey1", 00:19:45.619 "method": "bdev_nvme_set_keys", 00:19:45.619 "req_id": 1 00:19:45.619 } 00:19:45.619 Got JSON-RPC error response 00:19:45.619 response: 00:19:45.619 { 00:19:45.619 "code": -13, 00:19:45.619 "message": "Permission denied" 00:19:45.619 } 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:45.619 16:32:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- 
# (( 1 != 0 )) 00:19:46.553 16:32:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.925 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:47.926 16:32:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:48.859 rmmod nvme_rdma 00:19:48.859 rmmod nvme_fabrics 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3861790 ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3861790 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3861790 ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3861790 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3861790 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3861790' 00:19:48.859 killing process with pid 3861790 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3861790 00:19:48.859 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3861790 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:19:49.119 16:32:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:51.649 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:51.649 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:54.932 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:19:56.305 16:32:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.AIj /tmp/spdk.key-null.DdI /tmp/spdk.key-sha256.9E4 /tmp/spdk.key-sha384.cqq /tmp/spdk.key-sha512.clT /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:19:56.305 16:32:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:58.836 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:19:58.836 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:00.208 00:20:00.208 real 1m0.550s 00:20:00.208 user 0m48.293s 00:20:00.208 sys 0m14.845s 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.208 ************************************ 00:20:00.208 END TEST nvmf_auth_host 00:20:00.208 ************************************ 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.208 ************************************ 00:20:00.208 START TEST nvmf_bdevperf 00:20:00.208 ************************************ 00:20:00.208 16:32:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:00.208 * Looking for test storage... 
00:20:00.466 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:00.466 16:32:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:00.466 16:32:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:00.466 16:32:54 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:00.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.466 --rc genhtml_branch_coverage=1 00:20:00.466 --rc genhtml_function_coverage=1 00:20:00.466 --rc genhtml_legend=1 00:20:00.466 --rc geninfo_all_blocks=1 00:20:00.466 --rc geninfo_unexecuted_blocks=1 00:20:00.466 00:20:00.466 ' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:00.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.466 --rc genhtml_branch_coverage=1 00:20:00.466 --rc genhtml_function_coverage=1 00:20:00.466 --rc genhtml_legend=1 00:20:00.466 --rc geninfo_all_blocks=1 00:20:00.466 --rc geninfo_unexecuted_blocks=1 00:20:00.466 00:20:00.466 ' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:00.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.466 --rc genhtml_branch_coverage=1 00:20:00.466 --rc genhtml_function_coverage=1 00:20:00.466 --rc genhtml_legend=1 00:20:00.466 --rc geninfo_all_blocks=1 00:20:00.466 --rc geninfo_unexecuted_blocks=1 00:20:00.466 00:20:00.466 ' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:00.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.466 --rc genhtml_branch_coverage=1 00:20:00.466 --rc genhtml_function_coverage=1 00:20:00.466 --rc genhtml_legend=1 00:20:00.466 --rc geninfo_all_blocks=1 00:20:00.466 --rc geninfo_unexecuted_blocks=1 00:20:00.466 00:20:00.466 ' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.466 16:32:55 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.466 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.467 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:00.467 16:32:55 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.467 16:32:55 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.815 16:33:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:05.815 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:05.815 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:05.815 Found net devices under 0000:18:00.0: mlx_0_0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:05.815 Found net devices under 0000:18:00.1: mlx_0_1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:05.815 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:05.815 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:05.815 altname enp24s0f0np0 00:20:05.815 altname ens785f0np0 00:20:05.815 inet 192.168.100.8/24 scope global mlx_0_0 00:20:05.815 valid_lft forever preferred_lft forever 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:05.815 16:33:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:05.815 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:05.815 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:05.815 altname enp24s0f1np1 00:20:05.815 altname ens785f1np1 00:20:05.815 inet 192.168.100.9/24 scope global mlx_0_1 00:20:05.815 valid_lft forever preferred_lft forever 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:05.815 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:06.073 16:33:00 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:06.073 192.168.100.9' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:06.073 192.168.100.9' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:06.073 192.168.100.9' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3877456 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3877456 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3877456 ']' 
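The address discovery that just completed reduces to a single pipeline per RDMA interface. A minimal sketch of the helper, reconstructed from the get_ip_address xtrace lines above (same commands, same flow):

    # Print the first IPv4 address bound to an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # -> 192.168.100.8  (becomes NVMF_FIRST_TARGET_IP)
    get_ip_address mlx_0_1   # -> 192.168.100.9  (becomes NVMF_SECOND_TARGET_IP)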
00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.073 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:06.073 [2024-12-06 16:33:00.658334] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:06.073 [2024-12-06 16:33:00.658383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.073 [2024-12-06 16:33:00.716738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:06.073 [2024-12-06 16:33:00.756085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.073 [2024-12-06 16:33:00.756122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.073 [2024-12-06 16:33:00.756131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.073 [2024-12-06 16:33:00.756138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.073 [2024-12-06 16:33:00.756145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
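The target bring-up traced above is worth decoding: -m 0xE is the reactor core mask (binary 1110, i.e. cores 1-3, matching the three "Reactor started" notices that follow), -e 0xFFFF enables every tracepoint group (hence the spdk_trace hints in the notices), and -i 0 fixes the shared-memory id. A condensed sketch using the exact flags from the trace; waitforlisten is the harness helper that blocks until the daemon answers on /var/tmp/spdk.sock:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # cores 1,2,3; all tracepoint groups; shm id 0
    nvmfpid=$!
    waitforlisten "$nvmfpid"                     # polls /var/tmp/spdk.sock before any rpc_cmd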
00:20:06.073 [2024-12-06 16:33:00.757464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.073 [2024-12-06 16:33:00.757537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.073 [2024-12-06 16:33:00.757540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.331 16:33:00 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 [2024-12-06 16:33:00.906922] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1668800/0x166ccf0) succeed. 00:20:06.331 [2024-12-06 16:33:00.914939] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1669df0/0x16ae390) succeed. 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 Malloc0 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.331 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:20:06.588 [2024-12-06 16:33:01.063317] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:06.588 { 00:20:06.588 "params": { 00:20:06.588 "name": "Nvme$subsystem", 00:20:06.588 "trtype": "$TEST_TRANSPORT", 00:20:06.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.588 "adrfam": "ipv4", 00:20:06.588 "trsvcid": "$NVMF_PORT", 00:20:06.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.588 "hdgst": ${hdgst:-false}, 00:20:06.588 "ddgst": ${ddgst:-false} 00:20:06.588 }, 00:20:06.588 "method": "bdev_nvme_attach_controller" 00:20:06.588 } 00:20:06.588 EOF 00:20:06.588 )") 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:20:06.588 16:33:01 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:06.588 "params": { 00:20:06.588 "name": "Nvme1", 00:20:06.588 "trtype": "rdma", 00:20:06.588 "traddr": "192.168.100.8", 00:20:06.588 "adrfam": "ipv4", 00:20:06.588 "trsvcid": "4420", 00:20:06.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.588 "hdgst": false, 00:20:06.588 "ddgst": false 00:20:06.588 }, 00:20:06.588 "method": "bdev_nvme_attach_controller" 00:20:06.588 }' 00:20:06.588 [2024-12-06 16:33:01.097297] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:06.588 [2024-12-06 16:33:01.097344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877508 ] 00:20:06.588 [2024-12-06 16:33:01.154949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.589 [2024-12-06 16:33:01.193127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.846 Running I/O for 1 seconds... 
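Before the results arrive, note how little setup the one-second run needed: five RPCs on the target side, plus one generated JSON config handed to bdevperf on /dev/fd/62 (so no nvme-cli connect happens on the host). The same provisioning expressed as standalone rpc.py calls, equivalent to the rpc_cmd lines traced above (rpc.py sits under scripts/ in the SPDK tree):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420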
00:20:07.778 19200.00 IOPS, 75.00 MiB/s 00:20:07.778 Latency(us) 00:20:07.778 [2024-12-06T15:33:02.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.778 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:07.778 Verification LBA range: start 0x0 length 0x4000 00:20:07.778 Nvme1n1 : 1.01 19235.70 75.14 0.00 0.00 6614.94 676.60 11116.85 00:20:07.778 [2024-12-06T15:33:02.506Z] =================================================================================================================== 00:20:07.778 [2024-12-06T15:33:02.506Z] Total : 19235.70 75.14 0.00 0.00 6614.94 676.60 11116.85 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3877830 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:20:08.035 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:08.036 { 00:20:08.036 "params": { 00:20:08.036 "name": "Nvme$subsystem", 00:20:08.036 "trtype": "$TEST_TRANSPORT", 00:20:08.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:08.036 "adrfam": "ipv4", 00:20:08.036 "trsvcid": "$NVMF_PORT", 00:20:08.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:08.036 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:08.036 "hdgst": ${hdgst:-false}, 00:20:08.036 "ddgst": ${ddgst:-false} 00:20:08.036 }, 00:20:08.036 "method": "bdev_nvme_attach_controller" 00:20:08.036 } 00:20:08.036 EOF 00:20:08.036 )") 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:20:08.036 16:33:02 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:08.036 "params": { 00:20:08.036 "name": "Nvme1", 00:20:08.036 "trtype": "rdma", 00:20:08.036 "traddr": "192.168.100.8", 00:20:08.036 "adrfam": "ipv4", 00:20:08.036 "trsvcid": "4420", 00:20:08.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:08.036 "hdgst": false, 00:20:08.036 "ddgst": false 00:20:08.036 }, 00:20:08.036 "method": "bdev_nvme_attach_controller" 00:20:08.036 }' 00:20:08.036 [2024-12-06 16:33:02.603607] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 
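A quick sanity check on the one-second result table above, before the 15-second run's output starts: at 4096-byte I/O the IOPS and bandwidth columns agree,

    # 19235.70 IOPS x 4096 B per I/O, expressed in MiB/s
    echo 'scale=4; 19235.70 * 4096 / 1024 / 1024' | bc   # -> 75.1394, i.e. the reported 75.14 MiB/s

The second bdevperf instance launched here repeats the verify workload for 15 seconds; the script then kills the first target mid-run (the kill -9 3877456 visible below) to exercise the error path with I/O in flight, so the ABORTED - SQ DELETION completions that follow are the induced failure, not a defect.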
00:20:08.036 [2024-12-06 16:33:02.603654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877830 ] 00:20:08.036 [2024-12-06 16:33:02.661168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.036 [2024-12-06 16:33:02.695410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.293 Running I/O for 15 seconds... 00:20:10.157 19139.00 IOPS, 74.76 MiB/s [2024-12-06T15:33:05.818Z] 19235.00 IOPS, 75.14 MiB/s [2024-12-06T15:33:05.818Z] 16:33:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3877456 00:20:11.090 16:33:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:20:11.914 17280.00 IOPS, 67.50 MiB/s [2024-12-06T15:33:06.642Z] [2024-12-06 16:33:06.585723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0 00:20:11.914 [2024-12-06 16:33:06.585863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21600 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.914 [2024-12-06 16:33:06.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:106f0000 sqhd:7210 p:0 m:0 dnr:0
[... 115 further queued WRITE commands (sqid:1, lba 21608 through 22520, len:8, SGL DATA BLOCK) and 4 READ commands (lba 21504 through 21528, SGL KEYED DATA BLOCK, key:0x183100) elided; each nvme_io_qpair_print_command notice was followed by an identical spdk_nvme_print_completion notice, ABORTED - SQ DELETION (00/08), logged between 16:33:06.585876 and 16:33:06.587439 while qpair 1 was torn down ...]
00:20:11.917 [2024-12-06 16:33:06.589298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:11.917 [2024-12-06 16:33:06.589309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:11.917 [2024-12-06 16:33:06.589315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:8 PRP1 0x0 PRP2 0x0
00:20:11.917 [2024-12-06 16:33:06.589322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:11.917 [2024-12-06 16:33:06.591918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:11.917 [2024-12-06 16:33:06.605249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:20:11.917 [2024-12-06 16:33:06.608930] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:11.917 [2024-12-06 16:33:06.608952] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:11.917 [2024-12-06 16:33:06.608966] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040
00:20:13.108 12960.00 IOPS, 50.62 MiB/s [2024-12-06T15:33:07.836Z]
[2024-12-06 16:33:07.613009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:20:13.108 [2024-12-06 16:33:07.613061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:13.108 [2024-12-06 16:33:07.613659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:13.108 [2024-12-06 16:33:07.613688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:13.108 [2024-12-06 16:33:07.613710] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:20:13.108 [2024-12-06 16:33:07.613734] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
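The abort storm above is the host-side signature of losing the target mid-run: bdevperf still has a full queue of I/O outstanding when the remote nvmf_tgt goes away, so every queued command completes as ABORTED - SQ DELETION, and each reconnect attempt then fails with RDMA_CM_EVENT_REJECTED until a new target is listening. A minimal sketch of the kill-and-recover pattern this test appears to drive (the real logic lives in test/nvmf/host/bdevperf.sh; paths, flag choices, and helper names here are assumptions inferred from the trace, not the suite's literal code):

    # Hedged sketch only: run verify I/O in the background, matching the Job
    # line reported further below (depth 128, IO size 4096, ~15 s runtime).
    build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 &
    bdevperf_pid=$!

    sleep 3
    kill -9 "$nvmfpid"     # drop the target: queued I/O completes ABORTED - SQ DELETION
    tgt_init               # bdevperf.sh helper: restart nvmf_tgt and rebuild the subsystem
    wait "$bdevperf_pid"   # I/O resumes once a controller reset finally succeeds

The "Killed" message and the tgt_init trace a few lines below are consistent with exactly this sequence.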
00:20:13.108 [2024-12-06 16:33:07.618347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:13.108 [2024-12-06 16:33:07.621143] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:13.108 [2024-12-06 16:33:07.621159] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:13.108 [2024-12-06 16:33:07.621165] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:20:13.934 10368.00 IOPS, 40.50 MiB/s [2024-12-06T15:33:08.662Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3877456 Killed "${NVMF_APP[@]}" "$@" 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3879388 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3879388 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3879388 ']' 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.934 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:13.934 [2024-12-06 16:33:08.623681] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:13.934 [2024-12-06 16:33:08.623728] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.934 [2024-12-06 16:33:08.625067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:13.934 [2024-12-06 16:33:08.625095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:20:13.934 [2024-12-06 16:33:08.625255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:13.934 [2024-12-06 16:33:08.625264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:13.934 [2024-12-06 16:33:08.625272] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:20:13.934 [2024-12-06 16:33:08.625283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:20:13.934 [2024-12-06 16:33:08.628659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:13.934 [2024-12-06 16:33:08.631132] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:13.934 [2024-12-06 16:33:08.631149] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:13.934 [2024-12-06 16:33:08.631156] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000170ed040 00:20:14.193 [2024-12-06 16:33:08.682265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.193 [2024-12-06 16:33:08.721727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.193 [2024-12-06 16:33:08.721763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.193 [2024-12-06 16:33:08.721772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.194 [2024-12-06 16:33:08.721779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.194 [2024-12-06 16:33:08.721788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.194 [2024-12-06 16:33:08.723033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.194 [2024-12-06 16:33:08.723113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.194 [2024-12-06 16:33:08.723116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.194 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.194 [2024-12-06 16:33:08.883458] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1290800/0x1294cf0) succeed. 
00:20:14.194 8640.00 IOPS, 33.75 MiB/s [2024-12-06T15:33:08.922Z] [2024-12-06 16:33:08.891514] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1291df0/0x12d6390) succeed. 00:20:14.453 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.453 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.453 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.453 16:33:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.453 Malloc0 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:14.453 [2024-12-06 16:33:09.028052] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.453 16:33:09 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3877830 00:20:15.022 [2024-12-06 16:33:09.635217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:15.022 [2024-12-06 16:33:09.635244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:15.022 [2024-12-06 16:33:09.635413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:15.022 [2024-12-06 16:33:09.635425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:15.022 [2024-12-06 16:33:09.635431] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:20:15.022 [2024-12-06 16:33:09.635441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
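Stripped of the xtrace noise, the target bring-up recorded above reduces to one app launch plus five RPCs. A condensed reconstruction (the suite wraps each call in its rpc_cmd helper; binary and script paths are assumed relative to the SPDK tree):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &   # reactors on cores 1-3, all tracepoint groups enabled
    nvmfpid=$!
    waitforlisten "$nvmfpid"                     # suite helper: poll until the RPC socket answers

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB memory-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener notice (NVMe/RDMA Target Listening on 192.168.100.8 port 4420) appears, the host's pending reconnects can land, which is what the successful controller reset just below reports.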
00:20:15.022 [2024-12-06 16:33:09.640550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:15.022 [2024-12-06 16:33:09.678091] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:20:16.215 7970.29 IOPS, 31.13 MiB/s [2024-12-06T15:33:12.319Z] 9374.00 IOPS, 36.62 MiB/s [2024-12-06T15:33:13.254Z] 10467.56 IOPS, 40.89 MiB/s [2024-12-06T15:33:14.189Z] 11343.10 IOPS, 44.31 MiB/s [2024-12-06T15:33:15.124Z] 12057.64 IOPS, 47.10 MiB/s [2024-12-06T15:33:16.058Z] 12654.83 IOPS, 49.43 MiB/s [2024-12-06T15:33:16.995Z] 13161.15 IOPS, 51.41 MiB/s [2024-12-06T15:33:17.943Z] 13593.00 IOPS, 53.10 MiB/s [2024-12-06T15:33:17.943Z] 13968.53 IOPS, 54.56 MiB/s 00:20:23.215 Latency(us) 00:20:23.215 [2024-12-06T15:33:17.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.215 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:23.215 Verification LBA range: start 0x0 length 0x4000 00:20:23.215 Nvme1n1 : 15.01 13969.42 54.57 10989.43 0.00 5109.38 321.61 1025274.31 00:20:23.215 [2024-12-06T15:33:17.943Z] =================================================================================================================== 00:20:23.215 [2024-12-06T15:33:17.943Z] Total : 13969.42 54.57 10989.43 0.00 5109.38 321.61 1025274.31 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:23.475 rmmod nvme_rdma 00:20:23.475 rmmod nvme_fabrics 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3879388 ']' 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3879388 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3879388 ']' 00:20:23.475 16:33:18 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3879388 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.475 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3879388 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3879388' 00:20:23.734 killing process with pid 3879388 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3879388 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3879388 00:20:23.734 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:23.735 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:23.735 00:20:23.735 real 0m23.576s 00:20:23.735 user 1m1.723s 00:20:23.735 sys 0m5.158s 00:20:23.735 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.735 16:33:18 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:23.735 ************************************ 00:20:23.735 END TEST nvmf_bdevperf 00:20:23.735 ************************************ 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.994 ************************************ 00:20:23.994 START TEST nvmf_target_disconnect 00:20:23.994 ************************************ 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:23.994 * Looking for test storage... 
00:20:23.994 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.994 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:23.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.995 --rc genhtml_branch_coverage=1 00:20:23.995 --rc genhtml_function_coverage=1 00:20:23.995 --rc genhtml_legend=1 00:20:23.995 --rc geninfo_all_blocks=1 00:20:23.995 --rc geninfo_unexecuted_blocks=1 00:20:23.995 00:20:23.995 ' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:23.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.995 --rc genhtml_branch_coverage=1 00:20:23.995 --rc genhtml_function_coverage=1 00:20:23.995 --rc genhtml_legend=1 00:20:23.995 --rc geninfo_all_blocks=1 00:20:23.995 --rc geninfo_unexecuted_blocks=1 00:20:23.995 00:20:23.995 ' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:23.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.995 --rc genhtml_branch_coverage=1 00:20:23.995 --rc genhtml_function_coverage=1 00:20:23.995 --rc genhtml_legend=1 00:20:23.995 --rc geninfo_all_blocks=1 00:20:23.995 --rc geninfo_unexecuted_blocks=1 00:20:23.995 00:20:23.995 ' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:23.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.995 --rc genhtml_branch_coverage=1 00:20:23.995 --rc genhtml_function_coverage=1 00:20:23.995 --rc genhtml_legend=1 00:20:23.995 --rc geninfo_all_blocks=1 00:20:23.995 --rc geninfo_unexecuted_blocks=1 00:20:23.995 00:20:23.995 ' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.995 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.995 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.996 16:33:18 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:30.564 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:30.564 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:30.564 16:33:24 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:30.564 Found net devices under 0000:18:00.0: mlx_0_0 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:30.564 Found net devices under 0000:18:00.1: mlx_0_1 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:30.564 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
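The device scan above is essentially a sysfs walk: for every supported PCI function, the net/ subdirectory names the kernel netdev(s) bound to it. A minimal sketch of that mapping, mirroring nvmf/common.sh lines 411 and 427 from the trace; the two PCI addresses are the ones this run found, everything else is illustrative:

for pci in 0000:18:00.0 0000:18:00.1; do
    # each netdev backed by this PCI function appears as a directory entry here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done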
00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:30.565 16:33:24 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:30.565 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:30.565 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:30.565 altname enp24s0f0np0 00:20:30.565 altname ens785f0np0 00:20:30.565 inet 192.168.100.8/24 scope global mlx_0_0 00:20:30.565 valid_lft forever preferred_lft forever 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:30.565 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:30.565 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:30.565 altname enp24s0f1np1 00:20:30.565 altname ens785f1np1 00:20:30.565 inet 192.168.100.9/24 scope global mlx_0_1 00:20:30.565 valid_lft forever preferred_lft forever 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
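allocate_nic_ips resolves each RDMA interface to its IPv4 address with exactly the pipeline traced above. A standalone version of that helper (interface names and addresses are the ones this run reports):

get_ip_address() {
    local interface=$1
    # field 4 of the one-line output is "ADDR/PREFIX"; cut strips the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9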
00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:30.565 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:30.566 192.168.100.9' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:30.566 192.168.100.9' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:30.566 192.168.100.9' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:30.566 ************************************ 00:20:30.566 START TEST nvmf_target_disconnect_tc1 00:20:30.566 ************************************ 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:20:30.566 16:33:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:30.566 [2024-12-06 16:33:24.507998] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:30.566 [2024-12-06 16:33:24.508033] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:30.566 [2024-12-06 16:33:24.508048] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:20:30.825 [2024-12-06 16:33:25.511876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:20:30.825 [2024-12-06 16:33:25.511947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
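This failure is the point of tc1: no target is listening yet, so spdk_nvme_probe() must fail, and the test wraps the reconnect run in the NOT helper whose bookkeeping (es=1, the (( !es == 0 )) check) is evaluated just below. A simplified sketch of that expect-failure pattern, omitting the real helper's signal- and pattern-handling:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command failed
}
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'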
00:20:30.825 [2024-12-06 16:33:25.511974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:20:30.825 [2024-12-06 16:33:25.512026] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:30.825 [2024-12-06 16:33:25.512048] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:20:30.825 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:20:30.825 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:30.825 Initializing NVMe Controllers 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.825 00:20:30.825 real 0m1.120s 00:20:30.825 user 0m0.915s 00:20:30.825 sys 0m0.195s 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.825 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.825 ************************************ 00:20:30.825 END TEST nvmf_target_disconnect_tc1 00:20:30.825 ************************************ 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:31.083 ************************************ 00:20:31.083 START TEST nvmf_target_disconnect_tc2 00:20:31.083 ************************************ 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3884725 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3884725 00:20:31.083 16:33:25 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3884725 ']' 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.083 16:33:25 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.083 [2024-12-06 16:33:25.643658] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:31.083 [2024-12-06 16:33:25.643701] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.083 [2024-12-06 16:33:25.717024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.083 [2024-12-06 16:33:25.753981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.083 [2024-12-06 16:33:25.754019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.083 [2024-12-06 16:33:25.754026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.083 [2024-12-06 16:33:25.754031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.083 [2024-12-06 16:33:25.754036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
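nvmfappstart amounts to launching nvmf_tgt in the background and polling its RPC socket until it answers. A minimal sketch, assuming it runs from the spdk tree and the default /var/tmp/spdk.sock socket; the real waitforlisten in autotest_common.sh adds timeouts and more robust error handling:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || break   # stop waiting if the target died
    sleep 0.5
done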
00:20:31.083 [2024-12-06 16:33:25.755332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:31.083 [2024-12-06 16:33:25.755442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:31.083 [2024-12-06 16:33:25.755561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:31.083 [2024-12-06 16:33:25.755561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 Malloc0 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 [2024-12-06 16:33:26.534002] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x184f170/0x185ae50) succeed. 00:20:32.014 [2024-12-06 16:33:26.542400] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1850800/0x189c4f0) succeed. 
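rpc_cmd is a thin wrapper over scripts/rpc.py, so the two configuration steps traced above (a 64 MB malloc bdev with 512-byte blocks, then the RDMA transport) reduce to:

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024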
00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.014 [2024-12-06 16:33:26.671681] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:32.014 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3884808 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:20:32.015 16:33:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:34.542 16:33:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3884725 00:20:34.542 16:33:28 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Read completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.477 Write completed with error (sct=0, sc=8) 00:20:35.477 starting I/O failed 00:20:35.478 Read completed with error (sct=0, sc=8) 00:20:35.478 starting I/O failed 00:20:35.478 Write completed with error (sct=0, sc=8) 00:20:35.478 starting I/O failed 00:20:35.478 Write completed with error (sct=0, sc=8) 00:20:35.478 starting I/O failed 00:20:35.478 Read completed with error (sct=0, sc=8) 00:20:35.478 starting I/O failed 00:20:35.478 Write completed with error (sct=0, sc=8) 00:20:35.478 starting I/O failed 00:20:35.478 [2024-12-06 16:33:29.858389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:36.045 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3884725 Killed "${NVMF_APP[@]}" "$@" 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3885591 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3885591 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3885591 ']' 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.045 16:33:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.045 [2024-12-06 16:33:30.746215] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:36.045 [2024-12-06 16:33:30.746261] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.304 [2024-12-06 16:33:30.820639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.304 [2024-12-06 16:33:30.857533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.304 [2024-12-06 16:33:30.857569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.304 [2024-12-06 16:33:30.857575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.304 [2024-12-06 16:33:30.857581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.304 [2024-12-06 16:33:30.857585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
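The relaunch reuses the first instance's application flags; roughly:

# -i 0       shared-memory instance ID (hence the /dev/shm/nvmf_trace.0 file
#            named in the notices above)
# -e 0xFFFF  tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF" notice
# -m 0xF0    core mask selecting cores 4-7, which is why the reactor messages
#            below report cores 4, 5, 6 and 7
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0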
00:20:36.304 [2024-12-06 16:33:30.858977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:36.304 [2024-12-06 16:33:30.859087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:36.304 [2024-12-06 16:33:30.859191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.304 [2024-12-06 16:33:30.859193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Read completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 Write completed with error (sct=0, sc=8) 00:20:36.304 starting I/O failed 00:20:36.304 [2024-12-06 16:33:30.863341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:36.304 [2024-12-06 16:33:30.864863] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received 
RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:36.304 [2024-12-06 16:33:30.864882] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:36.304 [2024-12-06 16:33:30.864889] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.870 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.128 Malloc0 00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:37.128 [2024-12-06 16:33:31.649541] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20d0170/0x20dbe50) succeed. 00:20:37.128 [2024-12-06 16:33:31.657948] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20d1800/0x211d4f0) succeed. 
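With the IB devices recreated, the subsystem is stood up next with the same three-RPC sequence as the first instance, traced below; in plain rpc.py terms (NQN and serial number are the ones from this run):

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420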
00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:37.128 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:37.129 [2024-12-06 16:33:31.789726] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.129 16:33:31 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3884808
00:20:37.387 [2024-12-06 16:33:31.868764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
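The rpc_cmd trace above amounts to the following target setup, shown here as a minimal sketch for reproducing it outside CI. It assumes a running nvmf_tgt and uses SPDK's scripts/rpc.py (which the test framework's rpc_cmd helper wraps); the bdev size and block size, subsystem NQN, serial number, address, and port are taken verbatim from the log, and all other settings are assumed defaults:

    # create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # enable the RDMA transport with 1024 shared receive buffers
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # create the subsystem (-a allows any host; serial from the log), then attach the namespace
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # listen for NVMe/RDMA on 192.168.100.8:4420, on both the subsystem and discovery
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

From this point the log records the tc2 disconnect loop: each CONNECT attempt fails (sct 1, sc 130) and the qpair cannot be recovered.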
00:20:37.387 [2024-12-06 16:33:31.877862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.877915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.877934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.877942] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.877949] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.887913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.897819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.897854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.897871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.897878] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.897883] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.908125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.917832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.917873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.917889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.917896] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.917902] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.927993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.937841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.937879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.937894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.937901] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.937910] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.948264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.958037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.958079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.958094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.958101] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.958107] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.968432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.977974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.978008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.978024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.978031] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.978036] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:31.988408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:31.998052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:31.998089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:31.998105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:31.998112] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:31.998118] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:32.008319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:32.018142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:32.018182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:32.018198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:32.018205] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:32.018211] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:32.028550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:32.038154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:32.038198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:32.038214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:32.038220] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:32.038226] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:32.048573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.387 qpair failed and we were unable to recover it.
00:20:37.387 [2024-12-06 16:33:32.058164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.387 [2024-12-06 16:33:32.058200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.387 [2024-12-06 16:33:32.058216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.387 [2024-12-06 16:33:32.058222] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.387 [2024-12-06 16:33:32.058228] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.387 [2024-12-06 16:33:32.068499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.388 qpair failed and we were unable to recover it.
00:20:37.388 [2024-12-06 16:33:32.078327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.388 [2024-12-06 16:33:32.078359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.388 [2024-12-06 16:33:32.078379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.388 [2024-12-06 16:33:32.078386] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.388 [2024-12-06 16:33:32.078392] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.388 [2024-12-06 16:33:32.088694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.388 qpair failed and we were unable to recover it.
00:20:37.388 [2024-12-06 16:33:32.098361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.388 [2024-12-06 16:33:32.098402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.388 [2024-12-06 16:33:32.098419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.388 [2024-12-06 16:33:32.098425] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.388 [2024-12-06 16:33:32.098431] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.388 [2024-12-06 16:33:32.108874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.388 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.118366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.118409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.118427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.118434] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.118439] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.128708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.138441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.138481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.138497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.138503] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.138509] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.148998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.158527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.158566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.158582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.158589] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.158594] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.168959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.178629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.178668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.178683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.178689] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.178695] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.188851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.198743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.198778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.198794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.198804] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.198809] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.208921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.218831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.218866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.218882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.218889] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.218895] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.229208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.238778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.238817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.238833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.238840] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.238845] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.249080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.258859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.258896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.258911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.258918] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.258923] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.269049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.278916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.278957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.278973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.278979] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.278985] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.289133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.298964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.298996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.299011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.299018] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.299023] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.309258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.319043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.319081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.319096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.319103] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.647 [2024-12-06 16:33:32.319108] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.647 [2024-12-06 16:33:32.329316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.647 qpair failed and we were unable to recover it.
00:20:37.647 [2024-12-06 16:33:32.339177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.647 [2024-12-06 16:33:32.339217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.647 [2024-12-06 16:33:32.339233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.647 [2024-12-06 16:33:32.339240] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.648 [2024-12-06 16:33:32.339246] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.648 [2024-12-06 16:33:32.349339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.648 qpair failed and we were unable to recover it.
00:20:37.648 [2024-12-06 16:33:32.359079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:37.648 [2024-12-06 16:33:32.359119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:37.648 [2024-12-06 16:33:32.359134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:37.648 [2024-12-06 16:33:32.359140] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:37.648 [2024-12-06 16:33:32.359145] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:37.648 [2024-12-06 16:33:32.369403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:37.648 qpair failed and we were unable to recover it. 00:20:37.915 [2024-12-06 16:33:32.379295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:37.915 [2024-12-06 16:33:32.379332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:37.915 [2024-12-06 16:33:32.379347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:37.915 [2024-12-06 16:33:32.379353] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:37.915 [2024-12-06 16:33:32.379359] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:37.915 [2024-12-06 16:33:32.389633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:37.915 qpair failed and we were unable to recover it. 00:20:37.915 [2024-12-06 16:33:32.399252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:37.915 [2024-12-06 16:33:32.399290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:37.915 [2024-12-06 16:33:32.399306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:37.915 [2024-12-06 16:33:32.399312] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:37.915 [2024-12-06 16:33:32.399318] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:37.915 [2024-12-06 16:33:32.409524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:37.915 qpair failed and we were unable to recover it. 
00:20:37.916 [2024-12-06 16:33:32.419400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.419439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.419455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.419461] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.419467] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.429615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.439380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.439416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.439432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.439438] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.439444] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.449568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.459500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.459543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.459561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.459568] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.459574] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.469774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.479450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.479487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.479502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.479509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.479514] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.489752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.499520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.499559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.499574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.499581] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.499586] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.509894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.519642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.519682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.519698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.519705] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.519710] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.529928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.539584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.539620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.539635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.539646] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.539652] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.550114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.559731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.559770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.559785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.559791] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.559797] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.569993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.579843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.579879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.579894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.579901] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.579907] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.589939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.599929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.599974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.599989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.599996] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.600002] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.609991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.916 qpair failed and we were unable to recover it.
00:20:37.916 [2024-12-06 16:33:32.620012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.916 [2024-12-06 16:33:32.620051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.916 [2024-12-06 16:33:32.620066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.916 [2024-12-06 16:33:32.620073] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.916 [2024-12-06 16:33:32.620078] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:37.916 [2024-12-06 16:33:32.630171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:37.917 qpair failed and we were unable to recover it.
00:20:37.917 [2024-12-06 16:33:32.639916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:37.917 [2024-12-06 16:33:32.639957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:37.917 [2024-12-06 16:33:32.639972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:37.917 [2024-12-06 16:33:32.639979] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:37.917 [2024-12-06 16:33:32.639984] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.650160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.659988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.660027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.660042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.660048] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.660054] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.670310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.680025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.680061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.680076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.680083] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.680088] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.690183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.700150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.700185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.700201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.700207] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.700213] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.710253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.720061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.720099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.720115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.720122] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.720128] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.730514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.740143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.740183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.740199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.740206] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.740212] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.750622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.760257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.760299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.760315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.760322] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.760327] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.770671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.780285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.780323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.780339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.780346] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.780352] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.790644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.800332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.800371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.800394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.800400] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.800406] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.810626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.820439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.820479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.820496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.820503] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.820509] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.830798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.840488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.840530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.840545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.840552] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.840558] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.850687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.860504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.860542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.860558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.860564] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.176 [2024-12-06 16:33:32.860570] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.176 [2024-12-06 16:33:32.870914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.176 qpair failed and we were unable to recover it.
00:20:38.176 [2024-12-06 16:33:32.880523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:38.176 [2024-12-06 16:33:32.880556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:38.176 [2024-12-06 16:33:32.880572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:38.176 [2024-12-06 16:33:32.880579] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:38.177 [2024-12-06 16:33:32.880587] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:38.177 [2024-12-06 16:33:32.890931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:38.177 qpair failed and we were unable to recover it.
00:20:38.177 [2024-12-06 16:33:32.900618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.177 [2024-12-06 16:33:32.900657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.177 [2024-12-06 16:33:32.900672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.177 [2024-12-06 16:33:32.900678] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.177 [2024-12-06 16:33:32.900684] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.435 [2024-12-06 16:33:32.910926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.435 qpair failed and we were unable to recover it. 00:20:38.435 [2024-12-06 16:33:32.920680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.435 [2024-12-06 16:33:32.920718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.435 [2024-12-06 16:33:32.920735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.435 [2024-12-06 16:33:32.920741] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.435 [2024-12-06 16:33:32.920747] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.435 [2024-12-06 16:33:32.931058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.435 qpair failed and we were unable to recover it. 00:20:38.435 [2024-12-06 16:33:32.940796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.435 [2024-12-06 16:33:32.940838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.435 [2024-12-06 16:33:32.940854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.435 [2024-12-06 16:33:32.940860] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.435 [2024-12-06 16:33:32.940866] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.435 [2024-12-06 16:33:32.951072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.435 qpair failed and we were unable to recover it. 
00:20:38.435 [2024-12-06 16:33:32.960617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.435 [2024-12-06 16:33:32.960650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.435 [2024-12-06 16:33:32.960665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.435 [2024-12-06 16:33:32.960672] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.435 [2024-12-06 16:33:32.960677] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.435 [2024-12-06 16:33:32.971043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.435 qpair failed and we were unable to recover it. 00:20:38.435 [2024-12-06 16:33:32.980871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.435 [2024-12-06 16:33:32.980910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.435 [2024-12-06 16:33:32.980925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.435 [2024-12-06 16:33:32.980932] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.435 [2024-12-06 16:33:32.980938] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.435 [2024-12-06 16:33:32.991485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.435 qpair failed and we were unable to recover it. 00:20:38.435 [2024-12-06 16:33:33.000903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.435 [2024-12-06 16:33:33.000938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.435 [2024-12-06 16:33:33.000953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.435 [2024-12-06 16:33:33.000960] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.000966] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.011162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 
00:20:38.436 [2024-12-06 16:33:33.020976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.021015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.021031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.021038] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.021043] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.031196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 00:20:38.436 [2024-12-06 16:33:33.041037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.041076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.041091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.041097] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.041103] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.051247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 00:20:38.436 [2024-12-06 16:33:33.061137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.061178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.061193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.061200] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.061206] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.071487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 
00:20:38.436 [2024-12-06 16:33:33.081143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.081185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.081200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.081207] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.081212] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.091481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 00:20:38.436 [2024-12-06 16:33:33.101224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.101266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.101281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.101287] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.101293] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.111675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 00:20:38.436 [2024-12-06 16:33:33.121206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.121239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.121255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.121262] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.121267] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.131620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 
00:20:38.436 [2024-12-06 16:33:33.141292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.141332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.141351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.141358] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.141363] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.436 [2024-12-06 16:33:33.151717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.436 qpair failed and we were unable to recover it. 00:20:38.436 [2024-12-06 16:33:33.161347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.436 [2024-12-06 16:33:33.161390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.436 [2024-12-06 16:33:33.161405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.436 [2024-12-06 16:33:33.161411] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.436 [2024-12-06 16:33:33.161417] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.171808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.181438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.181474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.181490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.181496] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.181501] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.191781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 
00:20:38.695 [2024-12-06 16:33:33.201443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.201475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.201491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.201498] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.201503] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.211711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.221574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.221611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.221627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.221633] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.221642] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.231942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.241588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.241627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.241643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.241649] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.241655] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.252068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 
00:20:38.695 [2024-12-06 16:33:33.261669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.261702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.261718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.261724] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.261730] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.271989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.281709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.281743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.281759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.281766] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.281772] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.291878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.301805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.301846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.301861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.301869] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.301874] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.312039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 
00:20:38.695 [2024-12-06 16:33:33.321863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.321900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.321916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.321922] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.321928] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.332029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.341904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.341943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.341958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.341965] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.341971] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.352211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.361903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.361942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.361958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.361964] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.361970] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.372378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 
00:20:38.695 [2024-12-06 16:33:33.382022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.695 [2024-12-06 16:33:33.382059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.695 [2024-12-06 16:33:33.382076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.695 [2024-12-06 16:33:33.382082] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.695 [2024-12-06 16:33:33.382088] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.695 [2024-12-06 16:33:33.392402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.695 qpair failed and we were unable to recover it. 00:20:38.695 [2024-12-06 16:33:33.402075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.696 [2024-12-06 16:33:33.402118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.696 [2024-12-06 16:33:33.402134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.696 [2024-12-06 16:33:33.402140] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.696 [2024-12-06 16:33:33.402146] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.696 [2024-12-06 16:33:33.412338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.696 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.422149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.422198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.422213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.422220] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.422225] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.432562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 
00:20:38.954 [2024-12-06 16:33:33.442203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.442244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.442260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.442266] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.442272] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.452475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.462191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.462231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.462246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.462252] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.462258] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.472685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.482225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.482268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.482283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.482293] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.482298] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.492621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 
00:20:38.954 [2024-12-06 16:33:33.502426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.502461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.502477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.502484] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.502489] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.512640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.522339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.522377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.522394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.522400] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.522406] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.532728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.542444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.542485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.542500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.542507] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.542512] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.552879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 
00:20:38.954 [2024-12-06 16:33:33.562563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.562601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.954 [2024-12-06 16:33:33.562617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.954 [2024-12-06 16:33:33.562623] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.954 [2024-12-06 16:33:33.562632] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.954 [2024-12-06 16:33:33.572906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.954 qpair failed and we were unable to recover it. 00:20:38.954 [2024-12-06 16:33:33.582587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.954 [2024-12-06 16:33:33.582620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.955 [2024-12-06 16:33:33.582636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.955 [2024-12-06 16:33:33.582642] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.955 [2024-12-06 16:33:33.582648] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.955 [2024-12-06 16:33:33.592999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.955 qpair failed and we were unable to recover it. 00:20:38.955 [2024-12-06 16:33:33.602669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.955 [2024-12-06 16:33:33.602708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.955 [2024-12-06 16:33:33.602724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.955 [2024-12-06 16:33:33.602731] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.955 [2024-12-06 16:33:33.602736] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.955 [2024-12-06 16:33:33.612932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.955 qpair failed and we were unable to recover it. 
00:20:38.955 [2024-12-06 16:33:33.622774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.955 [2024-12-06 16:33:33.622811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.955 [2024-12-06 16:33:33.622826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.955 [2024-12-06 16:33:33.622833] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.955 [2024-12-06 16:33:33.622838] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.955 [2024-12-06 16:33:33.633473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.955 qpair failed and we were unable to recover it. 00:20:38.955 [2024-12-06 16:33:33.642873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.955 [2024-12-06 16:33:33.642908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.955 [2024-12-06 16:33:33.642923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.955 [2024-12-06 16:33:33.642930] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.955 [2024-12-06 16:33:33.642935] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.955 [2024-12-06 16:33:33.653131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.955 qpair failed and we were unable to recover it. 00:20:38.955 [2024-12-06 16:33:33.662850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:38.955 [2024-12-06 16:33:33.662889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:38.955 [2024-12-06 16:33:33.662904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:38.955 [2024-12-06 16:33:33.662911] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:38.955 [2024-12-06 16:33:33.662916] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:38.955 [2024-12-06 16:33:33.673181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:38.955 qpair failed and we were unable to recover it. 
00:20:39.213 [2024-12-06 16:33:33.683005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.683045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.683060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.683066] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.683072] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.693222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.702843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.702880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.702896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.702903] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.702908] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.713148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.723031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.723074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.723090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.723096] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.723102] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.733310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 
00:20:39.213 [2024-12-06 16:33:33.742887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.742926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.742945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.742952] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.742957] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.753572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.763160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.763196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.763211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.763217] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.763222] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.773513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.783184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.783222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.783237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.783243] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.783249] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.793453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 
00:20:39.213 [2024-12-06 16:33:33.803363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.803403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.803418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.803425] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.803430] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.813652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.823271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.823313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.823328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.823337] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.823343] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.833757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.843349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.843390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.843406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.843413] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.843419] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.853558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 
00:20:39.213 [2024-12-06 16:33:33.863433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.213 [2024-12-06 16:33:33.863471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.213 [2024-12-06 16:33:33.863486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.213 [2024-12-06 16:33:33.863493] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.213 [2024-12-06 16:33:33.863499] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.213 [2024-12-06 16:33:33.873721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.213 qpair failed and we were unable to recover it. 00:20:39.213 [2024-12-06 16:33:33.883474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.214 [2024-12-06 16:33:33.883509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.214 [2024-12-06 16:33:33.883525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.214 [2024-12-06 16:33:33.883531] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.214 [2024-12-06 16:33:33.883537] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.214 [2024-12-06 16:33:33.893703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.214 qpair failed and we were unable to recover it. 00:20:39.214 [2024-12-06 16:33:33.903572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.214 [2024-12-06 16:33:33.903611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.214 [2024-12-06 16:33:33.903627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.214 [2024-12-06 16:33:33.903634] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.214 [2024-12-06 16:33:33.903640] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.214 [2024-12-06 16:33:33.913763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.214 qpair failed and we were unable to recover it. 
00:20:39.214 [2024-12-06 16:33:33.923614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.214 [2024-12-06 16:33:33.923651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.214 [2024-12-06 16:33:33.923666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.214 [2024-12-06 16:33:33.923673] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.214 [2024-12-06 16:33:33.923678] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.214 [2024-12-06 16:33:33.933960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.214 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:33.943707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:33.943747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:33.943761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:33.943768] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:33.943774] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:33.954144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:33.963815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:33.963853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:33.963868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:33.963875] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:33.963881] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:33.973972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 
00:20:39.472 [2024-12-06 16:33:33.983853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:33.983890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:33.983905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:33.983912] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:33.983918] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:33.994137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.003880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.003919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.003933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.003940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.003945] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.014026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.023897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.023938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.023954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.023960] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.023966] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.034328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 
00:20:39.472 [2024-12-06 16:33:34.044031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.044068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.044084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.044090] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.044096] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.054048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.064063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.064102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.064119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.064126] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.064131] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.074442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.084198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.084238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.084256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.084262] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.084268] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.094440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 
00:20:39.472 [2024-12-06 16:33:34.104209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.104249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.104265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.104273] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.104278] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.114511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.124190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.124227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.124243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.124249] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.124256] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.134359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 00:20:39.472 [2024-12-06 16:33:34.144308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.144350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.144365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.144372] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.472 [2024-12-06 16:33:34.144382] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.472 [2024-12-06 16:33:34.154669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.472 qpair failed and we were unable to recover it. 
00:20:39.472 [2024-12-06 16:33:34.164365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.472 [2024-12-06 16:33:34.164409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.472 [2024-12-06 16:33:34.164424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.472 [2024-12-06 16:33:34.164434] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.473 [2024-12-06 16:33:34.164440] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.473 [2024-12-06 16:33:34.174680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.473 qpair failed and we were unable to recover it. 00:20:39.473 [2024-12-06 16:33:34.184442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.473 [2024-12-06 16:33:34.184480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.473 [2024-12-06 16:33:34.184495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.473 [2024-12-06 16:33:34.184502] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.473 [2024-12-06 16:33:34.184508] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.473 [2024-12-06 16:33:34.194705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.473 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.204446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.204488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.204503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.204509] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.204515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.214794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 
00:20:39.731 [2024-12-06 16:33:34.224438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.224479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.224495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.224502] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.224507] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.234876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.244672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.244713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.244729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.244736] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.244741] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.254707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.264629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.264669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.264685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.264692] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.264698] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.275302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 
00:20:39.731 [2024-12-06 16:33:34.284696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.284739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.284754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.284761] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.284767] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.294964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.304794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.304831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.304847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.304853] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.304859] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.315214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.324818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.324856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.324872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.324878] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.324883] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.335082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 
00:20:39.731 [2024-12-06 16:33:34.344903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.344942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.344957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.344963] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.344969] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.355273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.364950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.364989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.365004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.365011] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.731 [2024-12-06 16:33:34.365016] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.731 [2024-12-06 16:33:34.375209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.731 qpair failed and we were unable to recover it. 00:20:39.731 [2024-12-06 16:33:34.384960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.731 [2024-12-06 16:33:34.384998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.731 [2024-12-06 16:33:34.385014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.731 [2024-12-06 16:33:34.385021] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.732 [2024-12-06 16:33:34.385026] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.732 [2024-12-06 16:33:34.395369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.732 qpair failed and we were unable to recover it. 
00:20:39.732 [2024-12-06 16:33:34.404992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.732 [2024-12-06 16:33:34.405031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.732 [2024-12-06 16:33:34.405046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.732 [2024-12-06 16:33:34.405053] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.732 [2024-12-06 16:33:34.405059] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.732 [2024-12-06 16:33:34.415411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.732 qpair failed and we were unable to recover it. 00:20:39.732 [2024-12-06 16:33:34.425067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.732 [2024-12-06 16:33:34.425106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.732 [2024-12-06 16:33:34.425125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.732 [2024-12-06 16:33:34.425132] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.732 [2024-12-06 16:33:34.425137] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.732 [2024-12-06 16:33:34.435417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.732 qpair failed and we were unable to recover it. 00:20:39.732 [2024-12-06 16:33:34.445208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.732 [2024-12-06 16:33:34.445249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.732 [2024-12-06 16:33:34.445263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.732 [2024-12-06 16:33:34.445270] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.732 [2024-12-06 16:33:34.445275] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.732 [2024-12-06 16:33:34.455476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.732 qpair failed and we were unable to recover it. 
00:20:39.991 [2024-12-06 16:33:34.465234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.991 [2024-12-06 16:33:34.465271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.991 [2024-12-06 16:33:34.465285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.991 [2024-12-06 16:33:34.465291] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.991 [2024-12-06 16:33:34.465297] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.991 [2024-12-06 16:33:34.475501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.991 qpair failed and we were unable to recover it. 00:20:39.991 [2024-12-06 16:33:34.485223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.991 [2024-12-06 16:33:34.485263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.991 [2024-12-06 16:33:34.485278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.991 [2024-12-06 16:33:34.485286] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.991 [2024-12-06 16:33:34.485291] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.991 [2024-12-06 16:33:34.495597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.991 qpair failed and we were unable to recover it. 00:20:39.991 [2024-12-06 16:33:34.505324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.991 [2024-12-06 16:33:34.505364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.991 [2024-12-06 16:33:34.505384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.991 [2024-12-06 16:33:34.505391] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.991 [2024-12-06 16:33:34.505400] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.991 [2024-12-06 16:33:34.515687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.991 qpair failed and we were unable to recover it. 
00:20:39.991 [2024-12-06 16:33:34.525362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.991 [2024-12-06 16:33:34.525404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.991 [2024-12-06 16:33:34.525420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.991 [2024-12-06 16:33:34.525427] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.991 [2024-12-06 16:33:34.525432] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.991 [2024-12-06 16:33:34.535743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.991 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.545387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.545426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.545442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.545449] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.545454] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.555745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.565450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.565488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.565503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.565510] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.565516] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.575768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 
00:20:39.992 [2024-12-06 16:33:34.585454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.585494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.585509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.585516] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.585522] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.595914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.605504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.605540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.605556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.605563] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.605569] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.615748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.625702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.625741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.625757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.625764] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.625770] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.635752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 
00:20:39.992 [2024-12-06 16:33:34.645768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.645807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.645822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.645829] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.645834] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.655889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.665712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.665750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.665765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.665771] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.665777] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.676026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:39.992 [2024-12-06 16:33:34.685693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.685735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.685750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.685757] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.685763] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.696093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 
00:20:39.992 [2024-12-06 16:33:34.705902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:39.992 [2024-12-06 16:33:34.705941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:39.992 [2024-12-06 16:33:34.705957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:39.992 [2024-12-06 16:33:34.705963] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:39.992 [2024-12-06 16:33:34.705969] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:39.992 [2024-12-06 16:33:34.716263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:39.992 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.725851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.725886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.725901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.725907] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.725913] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.736251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.746051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.746088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.746104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.746111] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.746116] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.756290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 
00:20:40.252 [2024-12-06 16:33:34.766122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.766164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.766183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.766189] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.766195] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.776377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.786031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.786074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.786090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.786097] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.786103] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.796397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.806069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.806102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.806116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.806123] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.806129] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.816513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 
00:20:40.252 [2024-12-06 16:33:34.826134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.826174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.826189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.826196] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.826202] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.836503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.846239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.846278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.846294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.846300] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.846309] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.856403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.866296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.866332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.866348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.866354] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.866360] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.876827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 
00:20:40.252 [2024-12-06 16:33:34.886309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.886346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.886362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.886368] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.886385] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.896748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.906463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.906500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.906515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.906521] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.906527] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.917079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 00:20:40.252 [2024-12-06 16:33:34.926479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.926516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.252 [2024-12-06 16:33:34.926532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.252 [2024-12-06 16:33:34.926538] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.252 [2024-12-06 16:33:34.926544] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.252 [2024-12-06 16:33:34.936863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.252 qpair failed and we were unable to recover it. 
00:20:40.252 [2024-12-06 16:33:34.946498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.252 [2024-12-06 16:33:34.946532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.253 [2024-12-06 16:33:34.946547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.253 [2024-12-06 16:33:34.946554] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.253 [2024-12-06 16:33:34.946559] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.253 [2024-12-06 16:33:34.956859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.253 qpair failed and we were unable to recover it. 00:20:40.253 [2024-12-06 16:33:34.966576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.253 [2024-12-06 16:33:34.966615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.253 [2024-12-06 16:33:34.966630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.253 [2024-12-06 16:33:34.966637] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.253 [2024-12-06 16:33:34.966643] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.253 [2024-12-06 16:33:34.976942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.253 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:34.986581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:34.986621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:34.986635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:34.986642] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:34.986647] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:34.996937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 
00:20:40.515 [2024-12-06 16:33:35.006666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.006702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.006718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.006725] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.006731] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.017248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.026792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.026830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.026846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.026852] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.026858] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.037204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.046852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.046887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.046902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.046908] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.046914] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.057076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 
00:20:40.515 [2024-12-06 16:33:35.066933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.066972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.066987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.066993] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.066999] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.077185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.086950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.086989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.087005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.087011] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.087017] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.097353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.106988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.107021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.107039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.107046] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.107051] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.117385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 
00:20:40.515 [2024-12-06 16:33:35.127100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.127139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.127154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.127161] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.127167] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.137310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.147169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.147210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.147226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.147232] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.147238] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.157550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 00:20:40.515 [2024-12-06 16:33:35.167329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:40.515 [2024-12-06 16:33:35.167366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:40.515 [2024-12-06 16:33:35.167390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:40.515 [2024-12-06 16:33:35.167397] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:40.515 [2024-12-06 16:33:35.167403] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:40.515 [2024-12-06 16:33:35.177486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:40.515 qpair failed and we were unable to recover it. 
00:20:40.515 [2024-12-06 16:33:35.187218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.515 [2024-12-06 16:33:35.187256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.515 [2024-12-06 16:33:35.187271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.515 [2024-12-06 16:33:35.187278] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.515 [2024-12-06 16:33:35.187286] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.515 [2024-12-06 16:33:35.197631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.515 qpair failed and we were unable to recover it.
00:20:40.515 [2024-12-06 16:33:35.207343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.515 [2024-12-06 16:33:35.207383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.515 [2024-12-06 16:33:35.207399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.515 [2024-12-06 16:33:35.207405] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.515 [2024-12-06 16:33:35.207411] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.515 [2024-12-06 16:33:35.217682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.515 qpair failed and we were unable to recover it.
00:20:40.515 [2024-12-06 16:33:35.227368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.515 [2024-12-06 16:33:35.227412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.515 [2024-12-06 16:33:35.227428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.515 [2024-12-06 16:33:35.227435] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.515 [2024-12-06 16:33:35.227440] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.515 [2024-12-06 16:33:35.237805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.515 qpair failed and we were unable to recover it.
00:20:40.775 [2024-12-06 16:33:35.247478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.775 [2024-12-06 16:33:35.247522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.247538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.247545] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.247550] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.257804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.267586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.267625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.267641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.267647] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.267653] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.277904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.287540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.287577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.287593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.287599] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.287605] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.297786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.307711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.307750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.307766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.307773] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.307778] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.317899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.327686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.327726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.327743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.327749] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.327755] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.337975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.347664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.347701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.347717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.347723] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.347729] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.358025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.367660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.367695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.367711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.367717] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.367723] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.378064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.387826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.387867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.387882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.387889] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.387894] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.398283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.407963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.408006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.408023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.408030] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.408036] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.418270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.427943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.427981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.427997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.428003] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.428009] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.438315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.448101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.448134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.448150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.448159] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.448165] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.458262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.468096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.468135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.468151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.468158] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.776 [2024-12-06 16:33:35.468163] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.776 [2024-12-06 16:33:35.478402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.776 qpair failed and we were unable to recover it.
00:20:40.776 [2024-12-06 16:33:35.488100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:40.776 [2024-12-06 16:33:35.488139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:40.776 [2024-12-06 16:33:35.488155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:40.776 [2024-12-06 16:33:35.488161] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:40.777 [2024-12-06 16:33:35.488166] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:40.777 [2024-12-06 16:33:35.498347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:40.777 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.508212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.508247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.508262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.508268] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.508273] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.518348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.035 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.528183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.528219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.528235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.528241] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.528247] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.538598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.035 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.548263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.548303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.548319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.548326] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.548331] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.558979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.035 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.568373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.568414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.568429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.568436] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.568442] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.578761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.035 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.588410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.588447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.588462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.588469] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.588475] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.598745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.035 qpair failed and we were unable to recover it.
00:20:41.035 [2024-12-06 16:33:35.608448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.035 [2024-12-06 16:33:35.608488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.035 [2024-12-06 16:33:35.608504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.035 [2024-12-06 16:33:35.608510] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.035 [2024-12-06 16:33:35.608515] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.035 [2024-12-06 16:33:35.618680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.628506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.628545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.628561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.628567] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.628572] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.638995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.648684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.648721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.648737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.648743] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.648749] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.658856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.668522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.668556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.668571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.668577] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.668583] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.679191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.688848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.688893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.688908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.688915] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.688920] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.698940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.708781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.708819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.708838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.708844] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.708850] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.719041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.728875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.728918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.728934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.728940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.728946] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.739100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.036 [2024-12-06 16:33:35.748878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.036 [2024-12-06 16:33:35.748920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.036 [2024-12-06 16:33:35.748936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.036 [2024-12-06 16:33:35.748942] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.036 [2024-12-06 16:33:35.748948] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.036 [2024-12-06 16:33:35.759058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.036 qpair failed and we were unable to recover it.
00:20:41.293 [2024-12-06 16:33:35.768974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.293 [2024-12-06 16:33:35.769018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.293 [2024-12-06 16:33:35.769032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.293 [2024-12-06 16:33:35.769039] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.293 [2024-12-06 16:33:35.769045] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.293 [2024-12-06 16:33:35.779096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.293 qpair failed and we were unable to recover it.
00:20:41.293 [2024-12-06 16:33:35.789061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.293 [2024-12-06 16:33:35.789102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.789117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.789128] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.789133] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.799251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.809215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.809256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.809272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.809278] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.809283] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.819322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.829137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.829178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.829192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.829198] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.829204] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.839493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.849190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.849232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.849248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.849254] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.849260] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.859616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.869282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.869321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.869337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.869344] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.869350] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.879525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.889330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.889370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.889391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.889398] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.889403] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.899654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.909441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.909484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.909500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.909507] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.909512] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.919571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.929452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.929489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.929504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.929511] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.929517] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.939743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.949476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.949515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.949531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.949537] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.949543] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.959868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.969607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.969650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.969666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.969672] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.969678] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.979767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:35.989630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:35.989665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:35.989680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:35.989686] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:35.989692] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:35.999937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.294 [2024-12-06 16:33:36.009672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.294 [2024-12-06 16:33:36.009711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.294 [2024-12-06 16:33:36.009726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.294 [2024-12-06 16:33:36.009733] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.294 [2024-12-06 16:33:36.009738] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.294 [2024-12-06 16:33:36.020096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.294 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.029676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.029719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.029734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.029740] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.029746] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.040099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.049854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.049891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.049909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.049916] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.049921] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.060069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.069930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.069965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.069981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.069988] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.069993] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.080127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.089872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.089909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.089924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.089931] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.089936] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.100279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.109991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.110032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.110047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.110054] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.110059] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.120518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.129996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.130029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.130045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.130056] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.130061] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.140350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.150004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.150041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.150057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.150063] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.150069] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.160399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.170099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.170140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.170155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.170162] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.170168] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.180421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.190274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.190313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.190329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.190336] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.553 [2024-12-06 16:33:36.190341] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.553 [2024-12-06 16:33:36.200913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.553 qpair failed and we were unable to recover it.
00:20:41.553 [2024-12-06 16:33:36.210199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.553 [2024-12-06 16:33:36.210240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.553 [2024-12-06 16:33:36.210255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.553 [2024-12-06 16:33:36.210262] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.554 [2024-12-06 16:33:36.210267] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.554 [2024-12-06 16:33:36.220607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.554 qpair failed and we were unable to recover it.
00:20:41.554 [2024-12-06 16:33:36.230244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.554 [2024-12-06 16:33:36.230285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.554 [2024-12-06 16:33:36.230301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.554 [2024-12-06 16:33:36.230308] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.554 [2024-12-06 16:33:36.230313] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.554 [2024-12-06 16:33:36.240732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.554 qpair failed and we were unable to recover it.
00:20:41.554 [2024-12-06 16:33:36.250387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.554 [2024-12-06 16:33:36.250421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.554 [2024-12-06 16:33:36.250438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.554 [2024-12-06 16:33:36.250444] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.554 [2024-12-06 16:33:36.250450] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.554 [2024-12-06 16:33:36.260502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.554 qpair failed and we were unable to recover it.
00:20:41.554 [2024-12-06 16:33:36.270409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.554 [2024-12-06 16:33:36.270446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.554 [2024-12-06 16:33:36.270462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.554 [2024-12-06 16:33:36.270469] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.554 [2024-12-06 16:33:36.270475] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.812 [2024-12-06 16:33:36.280892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.813 qpair failed and we were unable to recover it.
00:20:41.813 [2024-12-06 16:33:36.290417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.813 [2024-12-06 16:33:36.290454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.813 [2024-12-06 16:33:36.290469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.813 [2024-12-06 16:33:36.290475] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.813 [2024-12-06 16:33:36.290481] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.813 [2024-12-06 16:33:36.300834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.813 qpair failed and we were unable to recover it.
00:20:41.813 [2024-12-06 16:33:36.310487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:41.813 [2024-12-06 16:33:36.310523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:41.813 [2024-12-06 16:33:36.310538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:41.813 [2024-12-06 16:33:36.310544] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:41.813 [2024-12-06 16:33:36.310550] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:41.813 [2024-12-06 16:33:36.320853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:41.813 qpair failed and we were unable to recover it.
00:20:41.813 [2024-12-06 16:33:36.330549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.330583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.330599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.330605] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.330611] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.340957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.350604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.350642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.350658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.350665] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.350670] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.361052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.370803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.370842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.370858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.370864] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.370870] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.381069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 
00:20:41.813 [2024-12-06 16:33:36.390783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.390818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.390837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.390844] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.390849] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.401126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.410818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.410852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.410869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.410875] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.410881] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.421132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.430980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.431017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.431033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.431040] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.431045] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.441204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 
00:20:41.813 [2024-12-06 16:33:36.451005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.451045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.451061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.451067] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.451072] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.461220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.470924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.470960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.470976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.470982] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.470991] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.481297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.491229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.491269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.491285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.491292] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.491298] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.501373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 
00:20:41.813 [2024-12-06 16:33:36.511164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.511204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.511220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.813 [2024-12-06 16:33:36.511227] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.813 [2024-12-06 16:33:36.511233] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:41.813 [2024-12-06 16:33:36.521419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:41.813 qpair failed and we were unable to recover it. 00:20:41.813 [2024-12-06 16:33:36.531268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:41.813 [2024-12-06 16:33:36.531309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:41.813 [2024-12-06 16:33:36.531324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:41.814 [2024-12-06 16:33:36.531330] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:41.814 [2024-12-06 16:33:36.531336] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.072 [2024-12-06 16:33:36.541313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.072 qpair failed and we were unable to recover it. 00:20:42.072 [2024-12-06 16:33:36.551239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.072 [2024-12-06 16:33:36.551280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.072 [2024-12-06 16:33:36.551295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.072 [2024-12-06 16:33:36.551301] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.072 [2024-12-06 16:33:36.551307] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.561527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 
00:20:42.073 [2024-12-06 16:33:36.571314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.571354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.571370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.571381] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.571387] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.581727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.591264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.591302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.591319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.591328] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.591334] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.601615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.611284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.611319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.611334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.611341] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.611347] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.621673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 
00:20:42.073 [2024-12-06 16:33:36.631440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.631474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.631489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.631496] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.631502] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.641832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.651556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.651600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.651615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.651622] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.651628] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.661823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.671578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.671615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.671630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.671636] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.671642] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.681827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 
00:20:42.073 [2024-12-06 16:33:36.691640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.691676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.691692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.691699] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.691704] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.701929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.711755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.711792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.711807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.711814] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.711820] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.722065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.731803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.731841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.731860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.731866] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.731872] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.741989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 
00:20:42.073 [2024-12-06 16:33:36.751880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.751919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.751934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.751941] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.751947] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.762223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.771864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.771902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.771917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.771924] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.771929] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.073 [2024-12-06 16:33:36.782197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.073 qpair failed and we were unable to recover it. 00:20:42.073 [2024-12-06 16:33:36.791945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.073 [2024-12-06 16:33:36.791979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.073 [2024-12-06 16:33:36.791995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.073 [2024-12-06 16:33:36.792002] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.073 [2024-12-06 16:33:36.792008] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.802226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 
00:20:42.332 [2024-12-06 16:33:36.811882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.811918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.811933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.811940] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.811948] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.822240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 00:20:42.332 [2024-12-06 16:33:36.832123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.832161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.832177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.832184] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.832189] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.842787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 00:20:42.332 [2024-12-06 16:33:36.852147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.852186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.852201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.852208] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.852214] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.862448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 
00:20:42.332 [2024-12-06 16:33:36.872186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.872222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.872238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.872245] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.872250] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.882543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 00:20:42.332 [2024-12-06 16:33:36.892128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.892165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.892180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.892187] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.892193] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.902370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 00:20:42.332 [2024-12-06 16:33:36.912204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:42.332 [2024-12-06 16:33:36.912244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:42.332 [2024-12-06 16:33:36.912261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:42.332 [2024-12-06 16:33:36.912267] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:42.332 [2024-12-06 16:33:36.912273] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:42.332 [2024-12-06 16:33:36.922699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:42.332 qpair failed and we were unable to recover it. 
00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Read completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.263 Write completed with error (sct=0, sc=8) 00:20:43.263 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Read completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Read completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Read completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 Write completed with error (sct=0, sc=8) 00:20:43.264 starting I/O failed 00:20:43.264 [2024-12-06 16:33:37.927479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:43.264 [2024-12-06 16:33:37.934945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.264 [2024-12-06 16:33:37.934981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.264 [2024-12-06 16:33:37.934997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.264 [2024-12-06 16:33:37.935004] 
nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.264 [2024-12-06 16:33:37.935010] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:20:43.264 [2024-12-06 16:33:37.945561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:43.264 qpair failed and we were unable to recover it. 00:20:43.264 [2024-12-06 16:33:37.955235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:43.264 [2024-12-06 16:33:37.955267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:43.264 [2024-12-06 16:33:37.955283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:43.264 [2024-12-06 16:33:37.955289] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:43.264 [2024-12-06 16:33:37.955295] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80 00:20:43.264 [2024-12-06 16:33:37.965552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:43.264 qpair failed and we were unable to recover it. 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O 
failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Write completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 Read completed with error (sct=0, sc=8) 00:20:44.634 starting I/O failed 00:20:44.634 [2024-12-06 16:33:38.970560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:44.634 [2024-12-06 16:33:38.977883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.634 [2024-12-06 16:33:38.977926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.634 [2024-12-06 16:33:38.977942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.634 [2024-12-06 16:33:38.977949] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.634 [2024-12-06 16:33:38.977955] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:20:44.634 [2024-12-06 16:33:38.988603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:44.634 qpair failed and we were unable to recover it. 00:20:44.634 [2024-12-06 16:33:38.998291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.634 [2024-12-06 16:33:38.998332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.634 [2024-12-06 16:33:38.998347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.634 [2024-12-06 16:33:38.998354] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.634 [2024-12-06 16:33:38.998359] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580 00:20:44.635 [2024-12-06 16:33:39.008685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:44.635 qpair failed and we were unable to recover it. 00:20:44.635 [2024-12-06 16:33:39.008780] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:20:44.635 A controller has encountered a failure and is being reset. 
00:20:44.635 [2024-12-06 16:33:39.018444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.635 [2024-12-06 16:33:39.018487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.635 [2024-12-06 16:33:39.018514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.635 [2024-12-06 16:33:39.018528] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.635 [2024-12-06 16:33:39.018541] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40 00:20:44.635 [2024-12-06 16:33:39.028625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.635 qpair failed and we were unable to recover it. 00:20:44.635 [2024-12-06 16:33:39.038425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:44.635 [2024-12-06 16:33:39.038465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:44.635 [2024-12-06 16:33:39.038484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:44.635 [2024-12-06 16:33:39.038493] nvme_rdma.c:1363:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:44.635 [2024-12-06 16:33:39.038502] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40 00:20:44.635 [2024-12-06 16:33:39.048749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:44.635 qpair failed and we were unable to recover it. 00:20:44.635 [2024-12-06 16:33:39.048871] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:20:44.635 [2024-12-06 16:33:39.081926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:20:44.635 Controller properly reset. 00:20:44.635 Initializing NVMe Controllers 00:20:44.635 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.635 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.635 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:44.635 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:44.635 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:44.635 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:44.635 Initialization complete. Launching workers. 
00:20:44.635 Starting thread on core 1 00:20:44.635 Starting thread on core 2 00:20:44.635 Starting thread on core 3 00:20:44.635 Starting thread on core 0 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:20:44.635 00:20:44.635 real 0m13.537s 00:20:44.635 user 0m29.581s 00:20:44.635 sys 0m2.687s 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.635 ************************************ 00:20:44.635 END TEST nvmf_target_disconnect_tc2 00:20:44.635 ************************************ 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:44.635 ************************************ 00:20:44.635 START TEST nvmf_target_disconnect_tc3 00:20:44.635 ************************************ 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3887196 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:20:44.635 16:33:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:20:46.532 16:33:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3885591 00:20:46.532 16:33:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read 
completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Write completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 Read completed with error (sct=0, sc=8) 00:20:47.905 starting I/O failed 00:20:47.905 [2024-12-06 16:33:42.355247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:20:48.841 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3885591 Killed "${NVMF_APP[@]}" "$@" 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3887864 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3887864 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 3887864 ']' 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.841 [2024-12-06 16:33:43.258756] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:20:48.841 [2024-12-06 16:33:43.258803] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.841 [2024-12-06 16:33:43.333203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 
00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Write completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 Read completed with error (sct=0, sc=8) 00:20:48.841 starting I/O failed 00:20:48.841 [2024-12-06 16:33:43.360159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:20:48.841 [2024-12-06 16:33:43.361685] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:48.841 [2024-12-06 16:33:43.361704] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:48.841 [2024-12-06 16:33:43.361711] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800 00:20:48.841 [2024-12-06 16:33:43.370860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.841 [2024-12-06 16:33:43.370886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.841 [2024-12-06 16:33:43.370893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.841 [2024-12-06 16:33:43.370899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.841 [2024-12-06 16:33:43.370904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.841 [2024-12-06 16:33:43.372177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:20:48.841 [2024-12-06 16:33:43.372284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:20:48.841 [2024-12-06 16:33:43.372404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:20:48.841 [2024-12-06 16:33:43.372405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:48.841 Malloc0
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.841 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:49.101 [2024-12-06 16:33:43.570619] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15dd170/0x15e8e50) succeed.
00:20:49.101 [2024-12-06 16:33:43.579128] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15de800/0x162a4f0) succeed.
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:49.101 [2024-12-06 16:33:43.714911] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.101 16:33:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3887196
00:20:49.668 [2024-12-06 16:33:44.365667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:49.668 qpair failed and we were unable to recover it.
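The rpc_cmd calls traced above are what stand up the target side of this test: a malloc bdev, an RDMA transport, a subsystem with that bdev as a namespace, and data plus discovery listeners. Outside the harness they would be plain rpc.py invocations against the default /var/tmp/spdk.sock socket; a sketch only (rpc_cmd is autotest plumbing around this script), with the arguments taken verbatim from the trace:

  spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420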
00:20:49.668 [2024-12-06 16:33:44.367143] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:49.668 [2024-12-06 16:33:44.367160] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:49.668 [2024-12-06 16:33:44.367166] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:51.102 [2024-12-06 16:33:45.370902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:51.102 qpair failed and we were unable to recover it.
00:20:51.102 [2024-12-06 16:33:45.372218] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.102 [2024-12-06 16:33:45.372234] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.102 [2024-12-06 16:33:45.372240] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:51.701 [2024-12-06 16:33:46.376068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:51.701 qpair failed and we were unable to recover it.
00:20:51.701 [2024-12-06 16:33:46.377327] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:51.701 [2024-12-06 16:33:46.377343] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:51.701 [2024-12-06 16:33:46.377349] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:53.073 [2024-12-06 16:33:47.381187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:53.074 qpair failed and we were unable to recover it.
00:20:53.074 [2024-12-06 16:33:47.382545] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:53.074 [2024-12-06 16:33:47.382560] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:53.074 [2024-12-06 16:33:47.382566] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:54.006 [2024-12-06 16:33:48.386256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:54.006 qpair failed and we were unable to recover it.
00:20:54.006 [2024-12-06 16:33:48.387566] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:54.006 [2024-12-06 16:33:48.387582] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:54.006 [2024-12-06 16:33:48.387588] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:54.941 [2024-12-06 16:33:49.391220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:54.941 qpair failed and we were unable to recover it.
00:20:54.941 [2024-12-06 16:33:49.392619] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:54.941 [2024-12-06 16:33:49.392634] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:54.941 [2024-12-06 16:33:49.392640] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd800
00:20:55.873 [2024-12-06 16:33:50.396403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:20:55.873 qpair failed and we were unable to recover it.
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Write completed with error (sct=0, sc=8)
00:20:56.804 starting I/O failed
00:20:56.804 Read completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Write completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Read completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Read completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Write completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Read completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 Write completed with error (sct=0, sc=8)
00:20:56.805 starting I/O failed
00:20:56.805 [2024-12-06 16:33:51.401424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Write completed with error (sct=0, sc=8)
00:20:57.734 starting I/O failed
00:20:57.734 Read completed with error (sct=0, sc=8)
00:20:57.735 starting I/O failed
00:20:57.735 [2024-12-06 16:33:52.406284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.735 [2024-12-06 16:33:52.407676] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:57.735 [2024-12-06 16:33:52.407692] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:57.735 [2024-12-06 16:33:52.407698] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80
00:20:59.102 [2024-12-06 16:33:53.411490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:20:59.102 qpair failed and we were unable to recover it.
00:20:59.102 [2024-12-06 16:33:53.412878] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:59.102 [2024-12-06 16:33:53.412893] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:59.102 [2024-12-06 16:33:53.412899] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b4d80
00:21:00.035 [2024-12-06 16:33:54.416923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:00.035 qpair failed and we were unable to recover it.
00:21:00.035 [2024-12-06 16:33:54.418420] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:00.035 [2024-12-06 16:33:54.418439] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:00.035 [2024-12-06 16:33:54.418445] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:21:00.967 [2024-12-06 16:33:55.422156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:00.967 qpair failed and we were unable to recover it.
00:21:00.967 [2024-12-06 16:33:55.423566] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:00.967 [2024-12-06 16:33:55.423581] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:00.967 [2024-12-06 16:33:55.423586] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cd580
00:21:01.899 [2024-12-06 16:33:56.427261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.899 qpair failed and we were unable to recover it.
00:21:01.899 [2024-12-06 16:33:56.427460] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:21:01.899 A controller has encountered a failure and is being reset.
00:21:01.899 Resorting to new failover address 192.168.100.9
00:21:01.899 [2024-12-06 16:33:56.428982] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:01.899 [2024-12-06 16:33:56.429004] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:01.899 [2024-12-06 16:33:56.429016] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:21:02.834 [2024-12-06 16:33:57.432716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.834 qpair failed and we were unable to recover it.
00:21:02.834 [2024-12-06 16:33:57.434157] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:02.834 [2024-12-06 16:33:57.434173] nvme_rdma.c:1108:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:02.834 [2024-12-06 16:33:57.434181] nvme_rdma.c:2988:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d2b40
00:21:03.769 [2024-12-06 16:33:58.437967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:21:03.769 qpair failed and we were unable to recover it.
00:21:03.769 [2024-12-06 16:33:58.438102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:03.769 [2024-12-06 16:33:58.438200] nvme_rdma.c: 567:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:21:03.769 [2024-12-06 16:33:58.470084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:21:03.769 Controller properly reset.
00:21:04.027 Initializing NVMe Controllers
00:21:04.028 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:04.028 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:04.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:21:04.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:21:04.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:21:04.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:21:04.028 Initialization complete. Launching workers.
00:21:04.028 Starting thread on core 1
00:21:04.028 Starting thread on core 2
00:21:04.028 Starting thread on core 3
00:21:04.028 Starting thread on core 0
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:21:04.028
00:21:04.028 real 0m19.327s
00:21:04.028 user 1m9.391s
00:21:04.028 sys 0m4.567s
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:04.028 ************************************
00:21:04.028 END TEST nvmf_target_disconnect_tc3
00:21:04.028 ************************************
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:21:04.028 rmmod nvme_rdma
00:21:04.028 rmmod nvme_fabrics
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3887864 ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3887864
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3887864 ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3887864
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887864
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887864'
00:21:04.028 killing process with pid 3887864
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3887864
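The reset/failover sequence above ("Resorting to new failover address 192.168.100.9", then reattach at 192.168.100.8) is driven by the host side of the test binary. For reference, the same two-path layout can be configured on any SPDK host through the bdev_nvme multipath RPCs; a hedged sketch only, not the test's own code, and the controller name Nvme0 is illustrative:

  spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # second path to the same subsystem, used only when the first path fails
  spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.9 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover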
00:21:04.028 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3887864
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:21:04.287
00:21:04.287 real 0m40.403s
00:21:04.287 user 2m43.306s
00:21:04.287 sys 0m12.157s
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:21:04.287 ************************************
00:21:04.287 END TEST nvmf_target_disconnect
00:21:04.287 ************************************
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:21:04.287
00:21:04.287 real 5m11.401s
00:21:04.287 user 12m44.413s
00:21:04.287 sys 1m25.976s
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:04.287 16:33:58 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:04.287 ************************************
00:21:04.287 END TEST nvmf_host
00:21:04.287 ************************************
00:21:04.287 16:33:58 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]]
00:21:04.287
00:21:04.287 real 16m8.031s
00:21:04.287 user 40m30.943s
00:21:04.287 sys 4m35.869s
00:21:04.287 16:33:58 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:04.287 16:33:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:21:04.287 ************************************
00:21:04.287 END TEST nvmf_rdma
00:21:04.287 ************************************
00:21:04.287 16:33:58 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:21:04.287 16:33:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:04.287 16:33:58 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:04.287 16:33:58 -- common/autotest_common.sh@10 -- # set +x
00:21:04.287 ************************************
00:21:04.287 START TEST spdkcli_nvmf_rdma
00:21:04.287 ************************************
00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma
00:21:04.546 * Looking for test storage...
00:21:04.546 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.546 --rc genhtml_branch_coverage=1 00:21:04.546 --rc genhtml_function_coverage=1 00:21:04.546 --rc genhtml_legend=1 00:21:04.546 --rc geninfo_all_blocks=1 00:21:04.546 --rc geninfo_unexecuted_blocks=1 00:21:04.546 00:21:04.546 ' 00:21:04.546 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:04.546 --rc genhtml_branch_coverage=1 00:21:04.546 --rc genhtml_function_coverage=1 00:21:04.546 --rc genhtml_legend=1 00:21:04.546 --rc geninfo_all_blocks=1 00:21:04.547 --rc geninfo_unexecuted_blocks=1 00:21:04.547 00:21:04.547 ' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.547 --rc genhtml_branch_coverage=1 00:21:04.547 --rc genhtml_function_coverage=1 00:21:04.547 --rc genhtml_legend=1 00:21:04.547 --rc geninfo_all_blocks=1 00:21:04.547 --rc geninfo_unexecuted_blocks=1 00:21:04.547 00:21:04.547 ' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.547 --rc genhtml_branch_coverage=1 00:21:04.547 --rc genhtml_function_coverage=1 00:21:04.547 --rc genhtml_legend=1 00:21:04.547 --rc geninfo_all_blocks=1 00:21:04.547 --rc geninfo_unexecuted_blocks=1 00:21:04.547 00:21:04.547 ' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3890743 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3890743 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3890743 ']' 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.547 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:04.547 [2024-12-06 16:33:59.264411] Starting SPDK v25.01-pre git sha1 f9a92382f / DPDK 24.03.0 initialization... 00:21:04.547 [2024-12-06 16:33:59.264464] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890743 ] 00:21:04.806 [2024-12-06 16:33:59.323362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:04.806 [2024-12-06 16:33:59.363677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.806 [2024-12-06 16:33:59.363680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:04.806 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
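The "[: : integer expression expected" complaint from nvmf/common.sh line 33 a few lines above is bash evaluating '[' '' -eq 1 ']': an unset or empty variable reached an arithmetic test. A defensive sketch of the same check; VAR is a stand-in, since the actual variable name is not visible in this trace:

  # default the value so the numeric test always sees an integer
  if [ "${VAR:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi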
00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.807 16:33:59 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:11.371 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:11.371 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:11.371 Found net devices under 0000:18:00.0: mlx_0_0 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:11.371 Found net devices under 0000:18:00.1: mlx_0_1 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:11.371 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:11.372 
16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:11.372 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:11.372 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:11.372 altname enp24s0f0np0 00:21:11.372 altname ens785f0np0 00:21:11.372 inet 192.168.100.8/24 scope global mlx_0_0 00:21:11.372 valid_lft forever preferred_lft forever 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:11.372 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:11.372 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:11.372 altname enp24s0f1np1 00:21:11.372 altname ens785f1np1 00:21:11.372 inet 192.168.100.9/24 scope global mlx_0_1 00:21:11.372 valid_lft forever preferred_lft forever 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:11.372 192.168.100.9' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:11.372 192.168.100.9' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:11.372 192.168.100.9' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:11.372 16:34:05 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:11.372 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:11.372 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:11.372 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:11.372 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:11.372 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:21:11.372 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:11.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:11.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:11.372 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:11.372 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:11.372 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:11.372 ' 00:21:13.273 [2024-12-06 16:34:07.698294] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x962440/0x970440) succeed. 00:21:13.273 [2024-12-06 16:34:07.706692] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x963b20/0x9f0480) succeed. 
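[Editor's note] The spdkcli_job.py invocation above feeds a quoted list of (command, expected-output, match-flag) triples to the running nvmf_tgt; the "Executing command: [...]" records that follow echo each triple back as it runs. As a minimal standalone sketch (not the harness itself), the same configuration steps could be issued one at a time through scripts/spdkcli.py, whose one-shot form appears later in this log as "spdkcli.py ll /nvmf". All names, serial numbers, and addresses below are copied from the records above; driving them individually rather than through spdkcli_job.py is an assumption of the sketch.

  #!/usr/bin/env bash
  # Minimal sketch, assuming an nvmf_tgt is already running and that
  # scripts/spdkcli.py accepts a single command as its arguments (the
  # "ll /nvmf" form seen later in this log). Values copied from the log.
  set -e
  SPDKCLI=./scripts/spdkcli.py
  $SPDKCLI /bdevs/malloc create 32 512 Malloc3
  $SPDKCLI nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4

Judging by the records below, a trailing True makes the harness check the command's output against the expected substring, while the two-field referral entry is echoed back with False.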
00:21:14.649 [2024-12-06 16:34:08.969244] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:21:16.550 [2024-12-06 16:34:11.200172] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:21:18.445 [2024-12-06 16:34:13.118357] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:20.347 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:20.347 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:20.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:20.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:20.347 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:20.347 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:20.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:20.347 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:21:20.347 16:34:14 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:20.611 16:34:15 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:20.611 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:20.611 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:20.611 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:20.611 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:21:20.611 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:21:20.611 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:20.611 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:20.611 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:20.611 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:20.611 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:20.611 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:20.611 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:20.611 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:20.611 ' 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:21:25.879 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:21:25.879 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:25.879 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:25.879 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3890743 ']' 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890743' 00:21:25.879 killing process with pid 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3890743 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.879 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:25.879 rmmod nvme_rdma 00:21:25.879 rmmod nvme_fabrics 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:26.139 00:21:26.139 real 0m21.602s 00:21:26.139 user 0m45.759s 00:21:26.139 sys 0m4.988s 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.139 16:34:20 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:26.139 ************************************ 00:21:26.139 END TEST spdkcli_nvmf_rdma 00:21:26.139 ************************************ 00:21:26.139 16:34:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:26.139 16:34:20 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:26.139 16:34:20 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:26.140 16:34:20 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:26.140 16:34:20 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:26.140 16:34:20 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:26.140 16:34:20 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:26.140 16:34:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:26.140 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:21:26.140 16:34:20 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:26.140 16:34:20 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:26.140 16:34:20 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:26.140 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:21:31.404 INFO: APP EXITING 00:21:31.404 INFO: killing all VMs 00:21:31.404 INFO: killing vhost app 00:21:31.404 INFO: EXIT DONE 00:21:32.774 Waiting for block devices as requested 00:21:32.774 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:33.032 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:33.032 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:33.032 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:33.032 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:33.290 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:33.290 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:33.290 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:21:33.290 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:33.547 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:33.547 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:33.547 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:33.547 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:33.804 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:33.804 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:33.804 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:34.062 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:38.248 Cleaning 00:21:38.248 Removing: /var/run/dpdk/spdk0/config 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:21:38.248 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:38.248 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:38.248 Removing: /var/run/dpdk/spdk1/config 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:21:38.248 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:38.248 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:38.248 Removing: /var/run/dpdk/spdk1/mp_socket 00:21:38.248 Removing: /var/run/dpdk/spdk2/config 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:21:38.248 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:38.248 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:38.248 Removing: /var/run/dpdk/spdk3/config 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:21:38.248 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:38.248 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:38.248 Removing: /var/run/dpdk/spdk4/config 00:21:38.248 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:21:38.248 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:38.248 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:38.248 Removing: /dev/shm/bdevperf_trace.pid3636357 00:21:38.248 Removing: /dev/shm/bdev_svc_trace.1 00:21:38.248 Removing: /dev/shm/nvmf_trace.0 00:21:38.248 Removing: /dev/shm/spdk_tgt_trace.pid3591488 00:21:38.248 Removing: /var/run/dpdk/spdk0 00:21:38.248 Removing: /var/run/dpdk/spdk1 00:21:38.248 Removing: /var/run/dpdk/spdk2 00:21:38.248 Removing: /var/run/dpdk/spdk3 00:21:38.248 Removing: /var/run/dpdk/spdk4 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3588168 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3589763 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3591488 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3591940 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3593013 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3593279 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3594378 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3594393 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3594771 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3599986 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3601906 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3602226 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3602551 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3602887 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3603107 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3603272 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3603528 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3603846 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3604678 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3607814 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3608102 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3608390 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3608393 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3608915 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3608962 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3609366 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3609521 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3609782 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3609819 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3610084 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3610117 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3610624 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3610807 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3611170 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3615084 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3619788 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3630668 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3631473 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3636357 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3636630 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3640715 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3646658 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3649614 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3659427 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3684785 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3688530 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3731121 
00:21:38.248 Removing: /var/run/dpdk/spdk_pid3737042 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3742705 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3751511 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3792085 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3792958 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3794063 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3795123 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3799729 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3806067 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3813128 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3814093 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3814977 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3816015 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3816537 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3820865 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3820867 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3825392 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3826003 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3826531 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3827440 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3827499 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3833073 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3833725 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3837902 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3840793 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3846249 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3856911 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3856916 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3877508 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3877830 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3884485 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3884808 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3887196 00:21:38.248 Removing: /var/run/dpdk/spdk_pid3890743 00:21:38.248 Clean 00:21:38.248 16:34:32 -- common/autotest_common.sh@1453 -- # return 0 00:21:38.248 16:34:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:38.248 16:34:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.248 16:34:32 -- common/autotest_common.sh@10 -- # set +x 00:21:38.248 16:34:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:38.248 16:34:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.248 16:34:32 -- common/autotest_common.sh@10 -- # set +x 00:21:38.248 16:34:32 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:21:38.248 16:34:32 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:21:38.248 16:34:32 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:21:38.248 16:34:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:38.248 16:34:32 -- spdk/autotest.sh@398 -- # hostname 00:21:38.248 16:34:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:21:38.248 geninfo: WARNING: invalid characters removed from testname! 
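[Editor's note] The geninfo capture above produces this run's cov_test.info; the records that follow merge it with the pre-test baseline and strip bundled DPDK and system sources. A condensed, hedged sketch of that capture-merge-filter flow: $OUT is an assumed shorthand for /var/jenkins/workspace/nvmf-phy-autotest/output, the spdk tree path is shortened to ./spdk, and the long --rc branch/function coverage options shown in the records are elided.

  # Capture per-test counters against the spdk tree (cf. the geninfo run above).
  OUT=/var/jenkins/workspace/nvmf-phy-autotest/output
  lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"
  # Merge the pre-test baseline with the test capture.
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # Drop bundled DPDK sources, then system headers, from the combined report,
  # as the filtering records that follow show.
  lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"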
00:21:56.322 16:34:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:58.856 16:34:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:00.232 16:34:54 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:01.657 16:34:56 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:03.647 16:34:57 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:05.024 16:34:59 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:22:06.401 16:35:01 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:06.401 16:35:01 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:06.401 16:35:01 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:22:06.401 16:35:01 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:06.401 16:35:01 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:06.401 16:35:01 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:22:06.401 + [[ -n 3509803 ]] 00:22:06.401 + sudo kill 3509803 00:22:06.410 [Pipeline] } 00:22:06.424 [Pipeline] // stage 00:22:06.430 [Pipeline] } 00:22:06.443 [Pipeline] 
// timeout 00:22:06.448 [Pipeline] } 00:22:06.461 [Pipeline] // catchError 00:22:06.466 [Pipeline] } 00:22:06.476 [Pipeline] // wrap 00:22:06.481 [Pipeline] } 00:22:06.489 [Pipeline] // catchError 00:22:06.495 [Pipeline] stage 00:22:06.496 [Pipeline] { (Epilogue) 00:22:06.504 [Pipeline] catchError 00:22:06.505 [Pipeline] { 00:22:06.514 [Pipeline] echo 00:22:06.515 Cleanup processes 00:22:06.520 [Pipeline] sh 00:22:06.800 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:06.800 3905701 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:06.812 [Pipeline] sh 00:22:07.092 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:22:07.092 ++ grep -v 'sudo pgrep' 00:22:07.092 ++ awk '{print $1}' 00:22:07.092 + sudo kill -9 00:22:07.092 + true 00:22:07.103 [Pipeline] sh 00:22:07.384 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:15.501 [Pipeline] sh 00:22:15.782 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:15.783 Artifacts sizes are good 00:22:15.798 [Pipeline] archiveArtifacts 00:22:15.806 Archiving artifacts 00:22:15.913 [Pipeline] sh 00:22:16.197 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:22:16.210 [Pipeline] cleanWs 00:22:16.220 [WS-CLEANUP] Deleting project workspace... 00:22:16.220 [WS-CLEANUP] Deferred wipeout is used... 00:22:16.227 [WS-CLEANUP] done 00:22:16.228 [Pipeline] } 00:22:16.246 [Pipeline] // catchError 00:22:16.259 [Pipeline] sh 00:22:16.576 + logger -p user.info -t JENKINS-CI 00:22:16.583 [Pipeline] } 00:22:16.596 [Pipeline] // stage 00:22:16.601 [Pipeline] } 00:22:16.615 [Pipeline] // node 00:22:16.621 [Pipeline] End of Pipeline 00:22:16.655 Finished: SUCCESS
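[Editor's note] For reference, the workspace-process sweep the Epilogue stage runs before archiving reduces to one pipeline: list anything still running out of the workspace spdk tree, drop the pgrep itself, and kill the survivors. A hedged reconstruction from the xtrace records above; the "+ sudo kill -9" followed by "+ true" suggests an empty PID list is tolerated with || true.

  # Assumed reconstruction of the cleanup seen in this log's Epilogue stage.
  WS=/var/jenkins/workspace/nvmf-phy-autotest
  sudo kill -9 $(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true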